url
stringlengths 62
66
| repository_url
stringclasses 1
value | labels_url
stringlengths 76
80
| comments_url
stringlengths 71
75
| events_url
stringlengths 69
73
| html_url
stringlengths 50
56
| id
int64 377M
2.15B
| node_id
stringlengths 18
32
| number
int64 1
29.2k
| title
stringlengths 1
487
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 2
classes | assignee
dict | assignees
list | comments
sequence | created_at
int64 1.54k
1.71k
| updated_at
int64 1.54k
1.71k
| closed_at
int64 1.54k
1.71k
⌀ | author_association
stringclasses 4
values | active_lock_reason
stringclasses 2
values | body
stringlengths 0
234k
⌀ | reactions
dict | timeline_url
stringlengths 71
75
| state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/1610 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1610/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1610/comments | https://api.github.com/repos/huggingface/transformers/issues/1610/events | https://github.com/huggingface/transformers/pull/1610 | 511,296,603 | MDExOlB1bGxSZXF1ZXN0MzMxNTIwMzEw | 1,610 | Update setup.py | {
"login": "singhanandVEVO",
"id": 24494214,
"node_id": "MDQ6VXNlcjI0NDk0MjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24494214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/singhanandVEVO",
"html_url": "https://github.com/singhanandVEVO",
"followers_url": "https://api.github.com/users/singhanandVEVO/followers",
"following_url": "https://api.github.com/users/singhanandVEVO/following{/other_user}",
"gists_url": "https://api.github.com/users/singhanandVEVO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/singhanandVEVO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/singhanandVEVO/subscriptions",
"organizations_url": "https://api.github.com/users/singhanandVEVO/orgs",
"repos_url": "https://api.github.com/users/singhanandVEVO/repos",
"events_url": "https://api.github.com/users/singhanandVEVO/events{/privacy}",
"received_events_url": "https://api.github.com/users/singhanandVEVO/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1610?src=pr&el=h1) Report\n> Merging [#1610](https://codecov.io/gh/huggingface/transformers/pull/1610?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ef1b8b2ae5ad1057154a126879f7eb8de685f862?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1610?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1610 +/- ##\n=======================================\n Coverage 86.17% 86.17% \n=======================================\n Files 91 91 \n Lines 13595 13595 \n=======================================\n Hits 11715 11715 \n Misses 1880 1880\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1610?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1610?src=pr&el=footer). Last update [ef1b8b2...2248c6b](https://codecov.io/gh/huggingface/transformers/pull/1610?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Why should we change this?"
] | 1,571 | 1,576 | 1,576 | NONE | null | changed update setup file | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1610/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1610",
"html_url": "https://github.com/huggingface/transformers/pull/1610",
"diff_url": "https://github.com/huggingface/transformers/pull/1610.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1610.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1609 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1609/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1609/comments | https://api.github.com/repos/huggingface/transformers/issues/1609/events | https://github.com/huggingface/transformers/issues/1609 | 511,294,787 | MDU6SXNzdWU1MTEyOTQ3ODc= | 1,609 | Can the prefix for GPT-2 conditional sampling be very long (longer than context window size)? | {
"login": "leejason",
"id": 4224456,
"node_id": "MDQ6VXNlcjQyMjQ0NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4224456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leejason",
"html_url": "https://github.com/leejason",
"followers_url": "https://api.github.com/users/leejason/followers",
"following_url": "https://api.github.com/users/leejason/following{/other_user}",
"gists_url": "https://api.github.com/users/leejason/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leejason/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leejason/subscriptions",
"organizations_url": "https://api.github.com/users/leejason/orgs",
"repos_url": "https://api.github.com/users/leejason/repos",
"events_url": "https://api.github.com/users/leejason/events{/privacy}",
"received_events_url": "https://api.github.com/users/leejason/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | Can the prefix for GPT-2 conditional sampling be very long (longer than context window size)? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1609/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1608 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1608/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1608/comments | https://api.github.com/repos/huggingface/transformers/issues/1608/events | https://github.com/huggingface/transformers/pull/1608 | 511,290,520 | MDExOlB1bGxSZXF1ZXN0MzMxNTE1MzEz | 1,608 | Error raised by "tmp_eval_loss += tmp_eval_loss.item()" when using multi-gpu | {
"login": "focox",
"id": 30308731,
"node_id": "MDQ6VXNlcjMwMzA4NzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/30308731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/focox",
"html_url": "https://github.com/focox",
"followers_url": "https://api.github.com/users/focox/followers",
"following_url": "https://api.github.com/users/focox/following{/other_user}",
"gists_url": "https://api.github.com/users/focox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/focox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/focox/subscriptions",
"organizations_url": "https://api.github.com/users/focox/orgs",
"repos_url": "https://api.github.com/users/focox/repos",
"events_url": "https://api.github.com/users/focox/events{/privacy}",
"received_events_url": "https://api.github.com/users/focox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1608?src=pr&el=h1) Report\n> Merging [#1608](https://codecov.io/gh/huggingface/transformers/pull/1608?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ef1b8b2ae5ad1057154a126879f7eb8de685f862?src=pr&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1608?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1608 +/- ##\n=========================================\n+ Coverage 86.17% 86.2% +0.02% \n=========================================\n Files 91 91 \n Lines 13595 13595 \n=========================================\n+ Hits 11715 11719 +4 \n+ Misses 1880 1876 -4\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1608?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1608/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `76.37% <0%> (+2.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1608?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1608?src=pr&el=footer). Last update [ef1b8b2...bd847ce](https://codecov.io/gh/huggingface/transformers/pull/1608?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @focox!"
] | 1,571 | 1,572 | 1,572 | NONE | null | fixed the bug raised by "tmp_eval_loss += tmp_eval_loss.item()" when parallelly using multi-gpu. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1608/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1608",
"html_url": "https://github.com/huggingface/transformers/pull/1608",
"diff_url": "https://github.com/huggingface/transformers/pull/1608.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1608.patch",
"merged_at": 1572452048000
} |
https://api.github.com/repos/huggingface/transformers/issues/1607 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1607/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1607/comments | https://api.github.com/repos/huggingface/transformers/issues/1607/events | https://github.com/huggingface/transformers/issues/1607 | 511,207,455 | MDU6SXNzdWU1MTEyMDc0NTU= | 1,607 | failed to download pretrained weights | {
"login": "minapril",
"id": 52790610,
"node_id": "MDQ6VXNlcjUyNzkwNjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/52790610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minapril",
"html_url": "https://github.com/minapril",
"followers_url": "https://api.github.com/users/minapril/followers",
"following_url": "https://api.github.com/users/minapril/following{/other_user}",
"gists_url": "https://api.github.com/users/minapril/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minapril/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minapril/subscriptions",
"organizations_url": "https://api.github.com/users/minapril/orgs",
"repos_url": "https://api.github.com/users/minapril/repos",
"events_url": "https://api.github.com/users/minapril/events{/privacy}",
"received_events_url": "https://api.github.com/users/minapril/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, this seems to be a network error. Are you sure you have access to the internet on this machine, or is it behind a firewall?",
"I had exactly the same problem Yesterday and s3.amazonaws.com just was not reachable. We also had the same problem with another service as well. After trying for some time it just started working again."
] | 1,571 | 1,571 | 1,571 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....):BERT
Language I am using the model on (English, Chinese....):English
During downloading pretrained weights with code of modeling_bert.BertForMaskedLM.from_pretrained('bert-base-uncased'),another exception occurred:

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1607/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1606 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1606/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1606/comments | https://api.github.com/repos/huggingface/transformers/issues/1606/events | https://github.com/huggingface/transformers/issues/1606 | 511,131,736 | MDU6SXNzdWU1MTExMzE3MzY= | 1,606 | Show pretrained model and config file download address directly in README.md & doc | {
"login": "Sunnycheey",
"id": 32103564,
"node_id": "MDQ6VXNlcjMyMTAzNTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/32103564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sunnycheey",
"html_url": "https://github.com/Sunnycheey",
"followers_url": "https://api.github.com/users/Sunnycheey/followers",
"following_url": "https://api.github.com/users/Sunnycheey/following{/other_user}",
"gists_url": "https://api.github.com/users/Sunnycheey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sunnycheey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sunnycheey/subscriptions",
"organizations_url": "https://api.github.com/users/Sunnycheey/orgs",
"repos_url": "https://api.github.com/users/Sunnycheey/repos",
"events_url": "https://api.github.com/users/Sunnycheey/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sunnycheey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I feel that this would clutter the README, leading to a bad experience for 99.99% of users. But you can always submit a PR and see what the maintainers think.",
"Just make a PR."
] | 1,571 | 1,572 | 1,572 | NONE | null | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. -->
Show download address of pretrained model address directly in markdown file.
## Motivation
Since my server cannot directly go to aws server, and it's not configed to proxy too. Intuitively i need to download pretrained model in my computer and then upload it to my server. The question is I need to checkout source code for download address. It's a really bad experience.
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1606/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1606/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1605 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1605/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1605/comments | https://api.github.com/repos/huggingface/transformers/issues/1605/events | https://github.com/huggingface/transformers/issues/1605 | 511,085,503 | MDU6SXNzdWU1MTEwODU1MDM= | 1,605 | Support for gpt2-medium, gpt2-large and distilgpt2 in pytorch-pretrained-bert 0.6.2 | {
"login": "g-karthik",
"id": 3851993,
"node_id": "MDQ6VXNlcjM4NTE5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-karthik",
"html_url": "https://github.com/g-karthik",
"followers_url": "https://api.github.com/users/g-karthik/followers",
"following_url": "https://api.github.com/users/g-karthik/following{/other_user}",
"gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions",
"organizations_url": "https://api.github.com/users/g-karthik/orgs",
"repos_url": "https://api.github.com/users/g-karthik/repos",
"events_url": "https://api.github.com/users/g-karthik/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-karthik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"As far as I know pytorch-pretrained-bert development has been discontinued. That makes sense. If you want the new features, you have to upgrade.",
"Well technically what I'm asking for isn't a new feature, it's just backwards-compatibility for the above three model artifacts.\r\n\r\nI can manually add them to the `modeling_gpt2.py` in my conda environment containing pytorch-pretrained-bert 0.6.2 and verify if these model artifacts work with the old package by invoking the `from_pretrained()` method with each of these three artifact names. I am guessing they would work, but I haven't tried yet.\r\n\r\nI feel like this dictionary containing pre-trained artifact names should itself reside in S3, and in `modeling_gpt2.py`, the dictionary should be pulled from S3. Then you could continually add new artifact sizes to that dictionary in S3 and it will work with all versions of this repo, not just some versions. Does that make sense?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | ## 🚀 Feature
Request: Inclusion of the below 3 lines in pytorch-pretrained-bert 0.6.2
https://github.com/huggingface/transformers/blob/ef1b8b2ae5ad1057154a126879f7eb8de685f862/transformers/modeling_gpt2.py#L40-L42
## Motivation
Currently, the above 3 lines exist in the latest version of transformers in PyPI, but not in pytorch-pretrained-bert 0.6.2 (also available in PyPI). Consequently, folks wanting to experiment with the above 3 pre-trained models need to necessarily upgrade to the latest version of transformers immediately. As a relief for such folks who plan to migrate eventually but not immediately, it would be great if the above 3 lines are added in pytorch-pretrained-bert 0.6.2.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1605/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1604 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1604/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1604/comments | https://api.github.com/repos/huggingface/transformers/issues/1604/events | https://github.com/huggingface/transformers/pull/1604 | 510,949,096 | MDExOlB1bGxSZXF1ZXN0MzMxMjM2MzEw | 1,604 | Versioning in documentation | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1604?src=pr&el=h1) Report\n> Merging [#1604](https://codecov.io/gh/huggingface/transformers/pull/1604?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ef1b8b2ae5ad1057154a126879f7eb8de685f862?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1604?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1604 +/- ##\n==========================================\n- Coverage 86.17% 86.16% -0.01% \n==========================================\n Files 91 91 \n Lines 13595 13593 -2 \n==========================================\n- Hits 11715 11713 -2 \n Misses 1880 1880\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1604?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1604/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9jdHJsLnB5) | `96.03% <0%> (-0.08%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1604?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1604?src=pr&el=footer). Last update [ef1b8b2...6e85bcc](https://codecov.io/gh/huggingface/transformers/pull/1604?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ready to merge",
"Awesome!"
] | 1,571 | 1,572 | 1,572 | MEMBER | null | Several versions of the documentation can now be accessed:
`huggingface.co/transformers` for the master release
`huggingface.co/transformers/v2.1.1` for the 2.1.1 official release and so on. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1604/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1604/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1604",
"html_url": "https://github.com/huggingface/transformers/pull/1604",
"diff_url": "https://github.com/huggingface/transformers/pull/1604.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1604.patch",
"merged_at": 1572451395000
} |
https://api.github.com/repos/huggingface/transformers/issues/1603 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1603/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1603/comments | https://api.github.com/repos/huggingface/transformers/issues/1603/events | https://github.com/huggingface/transformers/pull/1603 | 510,862,905 | MDExOlB1bGxSZXF1ZXN0MzMxMTYyMjU2 | 1,603 | [scripts] Proposal: add a specific device flag | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1603?src=pr&el=h1) Report\n> Merging [#1603](https://codecov.io/gh/huggingface/transformers/pull/1603?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e16d46843a19ab289b82138e4eccec5610a76de7?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1603?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1603 +/- ##\n=======================================\n Coverage 86.16% 86.16% \n=======================================\n Files 91 91 \n Lines 13593 13593 \n=======================================\n Hits 11713 11713 \n Misses 1880 1880\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1603?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1603/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9jdHJsLnB5) | `96.03% <0%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1603?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1603?src=pr&el=footer). Last update [e16d468...b0af23c](https://codecov.io/gh/huggingface/transformers/pull/1603?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"In my script, I take another approach for this. I assume that distributed training can only be instantiated by using torch's launch script 'or that at least the WORLD_SIZE env variable is set). `local_rank` will be used for the GPU id, even when not in distributed mode.\r\n\r\n```python\r\n# torch.distributed.launch adds a world_size environment variable\r\ndistributed = int(os.environ['WORLD_SIZE']) > 1 if 'WORLD_SIZE' in os.environ else False\r\n```\r\n\r\nBased on that, you can decide what you want to do with `local_rank`. If we're in distributed mode, start the process group, if we're not: use the `local_rank` cuda device.\r\n\r\n```python\r\nif local_rank == -1 or not torch.cuda.is_available():\r\n device = torch.device('cpu')\r\nelse:\r\n device = torch.device(f\"cuda:{local_rank}\")\r\n if distributed:\r\n dist.init_process_group(backend='nccl', init_method='env://')\r\n```\r\n\r\nAs a bonus, to ensure that all processes such as validating only happen on the main device, even if that's not cuda:0 (even though personally I do that on all devices, too):\r\n\r\n```python\r\nis_first_process = not distributed or local_rank in [0, -1]\r\n# ...\r\nif args.do_eval and is_first_process:\r\n # do eval\r\n```\r\n\r\nI merely post this for possible inspiration, of course!",
"Sounds good to me let's add this to all the examples (and the template in `templates/adding_a_new_example_script`)",
"> Sounds good to me let's add this to all the examples\r\n\r\nThe other scripts maybe make less sense as you would want to train on all available devices? Not 100% sure yet.",
"Ok I see, then maybe let's have the device flag on `run_generation` instead of `run_squad` (as currently proposed in the PR)?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,586 | 1,578 | MEMBER | null | wdyt?
Will do in other scripts if this gets merged.
My use case is I have an instance with multiple GPUs and want to run one generation on `cuda:0`, another one on `cuda:1`, etc. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1603/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1603",
"html_url": "https://github.com/huggingface/transformers/pull/1603",
"diff_url": "https://github.com/huggingface/transformers/pull/1603.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1603.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1602 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1602/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1602/comments | https://api.github.com/repos/huggingface/transformers/issues/1602/events | https://github.com/huggingface/transformers/pull/1602 | 510,852,221 | MDExOlB1bGxSZXF1ZXN0MzMxMTUzNTU2 | 1,602 | Fix architectures count | {
"login": "dataista0",
"id": 4383443,
"node_id": "MDQ6VXNlcjQzODM0NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4383443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dataista0",
"html_url": "https://github.com/dataista0",
"followers_url": "https://api.github.com/users/dataista0/followers",
"following_url": "https://api.github.com/users/dataista0/following{/other_user}",
"gists_url": "https://api.github.com/users/dataista0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dataista0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dataista0/subscriptions",
"organizations_url": "https://api.github.com/users/dataista0/orgs",
"repos_url": "https://api.github.com/users/dataista0/repos",
"events_url": "https://api.github.com/users/dataista0/events{/privacy}",
"received_events_url": "https://api.github.com/users/dataista0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thanks!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1602?src=pr&el=h1) Report\n> Merging [#1602](https://codecov.io/gh/huggingface/transformers/pull/1602?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1cfd9748683db43af2c98da1a19d39f0efc8cc3b?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1602?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1602 +/- ##\n=======================================\n Coverage 86.16% 86.16% \n=======================================\n Files 91 91 \n Lines 13593 13593 \n=======================================\n Hits 11713 11713 \n Misses 1880 1880\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1602?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1602?src=pr&el=footer). Last update [1cfd974...25d32f4](https://codecov.io/gh/huggingface/transformers/pull/1602?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,571 | 1,571 | 1,571 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1602/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1602",
"html_url": "https://github.com/huggingface/transformers/pull/1602",
"diff_url": "https://github.com/huggingface/transformers/pull/1602.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1602.patch",
"merged_at": 1571771628000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/1601 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1601/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1601/comments | https://api.github.com/repos/huggingface/transformers/issues/1601/events | https://github.com/huggingface/transformers/pull/1601 | 510,826,670 | MDExOlB1bGxSZXF1ZXN0MzMxMTMyNTU2 | 1,601 | Clean roberta model & all tokenizers now add special tokens by default (breaking change) | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1601?src=pr&el=h1) Report\n> Merging [#1601](https://codecov.io/gh/huggingface/transformers/pull/1601?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/079bfb32fba4f2b39d344ca7af88d79a3ff27c7c?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1601?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1601 +/- ##\n==========================================\n- Coverage 85.9% 85.88% -0.02% \n==========================================\n Files 91 91 \n Lines 13653 13640 -13 \n==========================================\n- Hits 11728 11715 -13 \n Misses 1925 1925\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1601?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1601/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `70.55% <ø> (-0.71%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1601/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `89.9% <ø> (-0.77%)` | :arrow_down: |\n| [transformers/tests/tokenization\\_bert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1601/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9iZXJ0X3Rlc3QucHk=) | `98.66% <100%> (ø)` | :arrow_up: |\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1601/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.43% <100%> (ø)` | :arrow_up: |\n| [transformers/tests/tokenization\\_roberta\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1601/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9yb2JlcnRhX3Rlc3QucHk=) | `92.45% <100%> (ø)` | :arrow_up: |\n| [transformers/tests/tokenization\\_xlnet\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1601/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl94bG5ldF90ZXN0LnB5) | `97.91% <100%> (ø)` | :arrow_up: |\n| [transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1601/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <100%> (ø)` | :arrow_up: |\n| [transformers/tests/tokenization\\_xlm\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1601/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl94bG1fdGVzdC5weQ==) | `97.72% <100%> (ø)` | :arrow_up: |\n| [transformers/tests/tokenization\\_distilbert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1601/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9kaXN0aWxiZXJ0X3Rlc3QucHk=) | `95.23% <100%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1601?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1601?src=pr&el=footer). Last update [079bfb3...3617469](https://codecov.io/gh/huggingface/transformers/pull/1601?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, LGTM"
] | 1,571 | 1,572 | 1,572 | MEMBER | null | The RoBERTa model checks that special tokens are in the input sequence as it cannot function as expected if they are not here. This is not the best practice:
- The print method is not handled on TPU, and the check is problematic when tracing the models
- RoBERTa is the only model to print this warning while other models that require special tokens (BERT, XLNet) don't.
The warning was removed and the encode/encode_plus/prepare_for_model methods now have `add_special_tokens` set to `True` by default. This is a **breaking change**, but it is a better practice. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1601/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1601/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1601",
"html_url": "https://github.com/huggingface/transformers/pull/1601",
"diff_url": "https://github.com/huggingface/transformers/pull/1601.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1601.patch",
"merged_at": 1572451240000
} |
https://api.github.com/repos/huggingface/transformers/issues/1600 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1600/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1600/comments | https://api.github.com/repos/huggingface/transformers/issues/1600/events | https://github.com/huggingface/transformers/issues/1600 | 510,777,589 | MDU6SXNzdWU1MTA3Nzc1ODk= | 1,600 | None in openAi-gpt tokenization | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Are you using the GPT tokenizer? If not try\r\n\r\n```\r\ntokenizer = transformers.OpenAIGTPTTokenizer()\r\ninput_ids = tokenizer.encode(your_text)\r\n```\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | Hi
I want to concat two sentences, and give it to openAI-gpt, I use cl sentence1 sep sentence2 sep
I got none with openai-gpt in the first position, could you tell me what is the expected format? thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1600/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1599 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1599/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1599/comments | https://api.github.com/repos/huggingface/transformers/issues/1599/events | https://github.com/huggingface/transformers/issues/1599 | 510,738,059 | MDU6SXNzdWU1MTA3MzgwNTk= | 1,599 | Issue in Cost Function | {
"login": "anandhperumal",
"id": 12907396,
"node_id": "MDQ6VXNlcjEyOTA3Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/12907396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anandhperumal",
"html_url": "https://github.com/anandhperumal",
"followers_url": "https://api.github.com/users/anandhperumal/followers",
"following_url": "https://api.github.com/users/anandhperumal/following{/other_user}",
"gists_url": "https://api.github.com/users/anandhperumal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anandhperumal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anandhperumal/subscriptions",
"organizations_url": "https://api.github.com/users/anandhperumal/orgs",
"repos_url": "https://api.github.com/users/anandhperumal/repos",
"events_url": "https://api.github.com/users/anandhperumal/events{/privacy}",
"received_events_url": "https://api.github.com/users/anandhperumal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @anandhperumal. Remember that you train GPT-2 by doing next-token prediction, therefore you need to compare the i-th input label--the truth--with what the model predicted: the (i-1)th output. Hence the indices shift.",
"@rlouf oh yeah. Thanks for the input.\r\nif you don't mind can you answer this question as well [transformers](https://github.com/huggingface/transfer-learning-conv-ai/issues/43) it's not directly related to transformers.\r\nThanks again.",
"You're welcome. I haven't worked on the other codebase, but I'll try to help if I can."
] | 1,571 | 1,571 | 1,571 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): GPT2
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [X] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details)
## To Reproduce
And in cost function, in logits we are ignoring the last element, why is that? though we're not using any padding
```
shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
```
and for labels we're dropping the first token why is that ?
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
Loss function shouldn't drop the last and first element in logits and labels unless it is padded, correct me if I'm wrong.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1599/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1598 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1598/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1598/comments | https://api.github.com/repos/huggingface/transformers/issues/1598/events | https://github.com/huggingface/transformers/pull/1598 | 510,639,726 | MDExOlB1bGxSZXF1ZXN0MzMwOTc4NzY5 | 1,598 | changing "out_features" of final linear layer | {
"login": "SKRohit",
"id": 9626333,
"node_id": "MDQ6VXNlcjk2MjYzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9626333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SKRohit",
"html_url": "https://github.com/SKRohit",
"followers_url": "https://api.github.com/users/SKRohit/followers",
"following_url": "https://api.github.com/users/SKRohit/following{/other_user}",
"gists_url": "https://api.github.com/users/SKRohit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SKRohit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SKRohit/subscriptions",
"organizations_url": "https://api.github.com/users/SKRohit/orgs",
"repos_url": "https://api.github.com/users/SKRohit/repos",
"events_url": "https://api.github.com/users/SKRohit/events{/privacy}",
"received_events_url": "https://api.github.com/users/SKRohit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1598?src=pr&el=h1) Report\n> Merging [#1598](https://codecov.io/gh/huggingface/transformers/pull/1598?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b8c9ea0010a09cca8173e5bdf4af855123aebfc7?src=pr&el=desc) will **decrease** coverage by `4.94%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1598?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1598 +/- ##\n==========================================\n- Coverage 86.16% 81.22% -4.95% \n==========================================\n Files 91 57 -34 \n Lines 13593 8028 -5565 \n==========================================\n- Hits 11713 6521 -5192 \n+ Misses 1880 1507 -373\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1598?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1598/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdXRpbHMucHk=) | `90.3% <100%> (ø)` | |\n| [transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1598/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbS5weQ==) | | |\n| [transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1598/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZGlzdGlsYmVydC5weQ==) | | |\n| [transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1598/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYmVydC5weQ==) | | |\n| [transformers/tests/tokenization\\_transfo\\_xl\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1598/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90cmFuc2ZvX3hsX3Rlc3QucHk=) | | |\n| [transformers/tests/modeling\\_bert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1598/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2JlcnRfdGVzdC5weQ==) | | |\n| [transformers/tests/tokenization\\_utils\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1598/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl91dGlsc190ZXN0LnB5) | | |\n| [transformers/tests/modeling\\_tf\\_ctrl\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1598/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2N0cmxfdGVzdC5weQ==) | | |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1598/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | | |\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1598/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | | |\n| ... and [139 more](https://codecov.io/gh/huggingface/transformers/pull/1598/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1598?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1598?src=pr&el=footer). Last update [b8c9ea0...9388320](https://codecov.io/gh/huggingface/transformers/pull/1598?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks you for this, I actually had this fix included in #1721"
] | 1,571 | 1,573 | 1,573 | CONTRIBUTOR | null | calling `resize_token_embeddings` changes the dimensions of the final linear layer. so changed `out_features` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1598/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1598",
"html_url": "https://github.com/huggingface/transformers/pull/1598",
"diff_url": "https://github.com/huggingface/transformers/pull/1598.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1598.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1597 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1597/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1597/comments | https://api.github.com/repos/huggingface/transformers/issues/1597/events | https://github.com/huggingface/transformers/issues/1597 | 510,620,485 | MDU6SXNzdWU1MTA2MjA0ODU= | 1,597 | _tokenize() got an unexpected keyword argument 'add_prefix_space' in CTRL | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @BramVanroy, thanks for reporting this. There was an issue in the docstring. It does not use prefix spaces and it does not use a byte-level BPE like GPT-2 does. The docstring should be fixed now."
] | 1,571 | 1,571 | 1,571 | COLLABORATOR | null | ## 🐛 Bug
If you look at [the search results in this repo](https://github.com/huggingface/transformers/search?q=add_prefix_space) for `add_prefix_space`, you'll find gpt2, roberta, and ctrl all document that
> `add_prefix_space`: Requires a space to start the input string => the encoding methods should be called with the --``add_prefix_space`` flag set to ``True``.
However, this attribute is only implemented in the GPT2Tokenizer. Since RobertaTokenizer subclasses GPT2Tokenizer, that is fine. However, CTRLTokenizer just subclasses the PretrainedTokenizer. As such, it does not have a `_tokenize()` method that accepts the `add_prefix_space` keyword.
I would fix this in a PR, but I am not sure what the actual correct fix is: does CTRL need the added space, or not? And can it subclass GPT2's tokenizer, or should it implement its own `_tokenize(*, add_prefix_space)` method?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1597/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1596 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1596/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1596/comments | https://api.github.com/repos/huggingface/transformers/issues/1596/events | https://github.com/huggingface/transformers/issues/1596 | 510,546,258 | MDU6SXNzdWU1MTA1NDYyNTg= | 1,596 | How to use BERT for ENTITY extraction from a Sequence without classification in the NER task ? | {
"login": "ManojPrabhakar",
"id": 5091907,
"node_id": "MDQ6VXNlcjUwOTE5MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5091907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ManojPrabhakar",
"html_url": "https://github.com/ManojPrabhakar",
"followers_url": "https://api.github.com/users/ManojPrabhakar/followers",
"following_url": "https://api.github.com/users/ManojPrabhakar/following{/other_user}",
"gists_url": "https://api.github.com/users/ManojPrabhakar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ManojPrabhakar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ManojPrabhakar/subscriptions",
"organizations_url": "https://api.github.com/users/ManojPrabhakar/orgs",
"repos_url": "https://api.github.com/users/ManojPrabhakar/repos",
"events_url": "https://api.github.com/users/ManojPrabhakar/events{/privacy}",
"received_events_url": "https://api.github.com/users/ManojPrabhakar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm a bit confused: you're basically defining the broad case of named entity recognition. Is it not enough to have a binary NER (token-level classification) task for entity vs non-entity?",
"Assuming you have 3-class (PER, ORG, LOC) data with labels:\r\nB-PER, I-PER, B-ORG, I-ORG, B-LOC, I-LOC, as well as O\r\n\r\nReplace PER, ORG, LOC with ENT.\r\nThis leaves you with these labels:\r\nB-ENT, I-ENT, O\r\n\r\nYou can do this before training, and then train a model specifically for 1-class named entity detection only.\r\nOr you can do this as a post-processing step on the output of the normal 3-class model.",
"@bheinzerling Thank you!!\r\n I will try this."
] | 1,571 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
My requirement here is given a sentence(sequence), I would like to just extract the entities present in the sequence without classifying them to a type in the NER task. I see that BERT has BertForTokenClassification for NER which does the classification.
So, can somebody give me an idea of how to do **entity extraction/identification using BERT**? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1596/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1595 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1595/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1595/comments | https://api.github.com/repos/huggingface/transformers/issues/1595/events | https://github.com/huggingface/transformers/issues/1595 | 510,502,762 | MDU6SXNzdWU1MTA1MDI3NjI= | 1,595 | Using HuggingFace TransfoXLLMHeadModel() with custom Torchtext vocabulary | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I faced the same problem:I solved it by passing the size of your vocabulary (from your custom tokenizer) as a parameter.\r\nI proceeded as follows:\r\n`vocabulary_size = tokenizer.vocab_size`\r\n```\r\nconfiguration = tf.TransfoXLConfig(vocab_size_or_config_json_file=vocabulary_size, cutoffs=cutoffs,\r\n d_model=512, d_embed=512, n_head=8, d_head=64, n_layer=12, d_inner=2048)\r\n```\r\nI hope that helped :)",
"P.s. What do you pass as inputs and labels?\r\nFor now, I create a batch as follows:\r\n\"The quick brown fox jumps over the lazy dog\"\r\nIf I have batch_size=2, and sequence length=4:\r\n[\"The quick brown fox\",\r\n\"jumps over the lazy\"]\r\n\r\nWhat do you feed to the Transformer-XL as input?"
] | 1,571 | 1,580 | 1,577 | NONE | null | Hello,
I am trying to use the HuggingFace TransfoXLLMHeadModel on WikiText2 dataset under a customized TransfoXLConfig with different vocabulary, and it causing an error. I am not sure how to fix it. Below are my code:
```js
# Import packages
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import TransfoXLConfig, TransfoXLTokenizer, TransfoXLModel, TransfoXLLMHeadModel, TFTransfoXLModel, TFTransfoXLLMHeadModel
import spacy
import torchtext
from torchtext.data.utils import get_tokenizer
from torchtext.data import Field, BPTTIterator, TabularDataset
import math
import random
import numpy as np
import pandas as pd
import time
# define the English text field
TEXT = Field(tokenize = 'spacy',
init_token='<sos>',
eos_token='<eos>',
tokenizer_language='en',
lower=True)
# load WikiText-2 dataset and split it into train and test set
train_Wiki2, val_Wiki2, test_Wiki2 = torchtext.datasets.WikiText2.splits(TEXT)
# build vocabulary based on the field that we just defined.
TEXT.build_vocab(train_Wiki2, val_Wiki2, test_Wiki2)
# get number of tokens
ntokens = len(TEXT.vocab.stoi) # ntokens = 28871
# define transformer-XL configuration.
transfoXLconfig = TransfoXLConfig(vocab_size_or_config_json_file = ntokens,
cutoffs = [20000, 40000, 200000],
d_model = 64,
d_embed = 64,
n_head = 16,
d_head = 64,
n_layer = 5,
attn_type = 0,
dropout = 0.1,
output_hidden_states = True,
output_attentions = True)
# define the transformer-XL model based on the specified configuration.
model = TransfoXLLMHeadModel(transfoXLconfig) # this line is causing an error.
"""
Error message:
Traceback (most recent call last):
File "<ipython-input-14-fa91df67f439>", line 1, in <module>
model = TransfoXLLMHeadModel(transfoXLconfig)
File "/Users/jin-dominique/anaconda3/lib/python3.7/site-packages/transformers/modeling_transfo_xl.py", line 818, in __init__
self.transformer = TransfoXLModel(config)
File "/Users/jin-dominique/anaconda3/lib/python3.7/site-packages/transformers/modeling_transfo_xl.py", line 599, in __init__
div_val=config.div_val)
File "/Users/jin-dominique/anaconda3/lib/python3.7/site-packages/transformers/modeling_transfo_xl.py", line 421, in __init__
self.emb_layers.append(nn.Embedding(r_idx-l_idx, d_emb_i))
File "/Users/jin-dominique/anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 97, in __init__
self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim))
RuntimeError: Trying to create tensor with negative dimension -171129: [-171129, 1]
model = TransfoXLLMHeadModel(transfoXLconfig)
"""
```
How can I use HuggingFace TransfoXLLMHeadModel( ) with a custom vocabulary of different size?
Thank you,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1595/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1594 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1594/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1594/comments | https://api.github.com/repos/huggingface/transformers/issues/1594/events | https://github.com/huggingface/transformers/issues/1594 | 510,483,278 | MDU6SXNzdWU1MTA0ODMyNzg= | 1,594 | Make benchmark more flexible (TF or PT) | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I believe a quick workaround is to just install the pre-built, CPU version of TensorFlow 2.0. If you won't be running the TF benchmarks, it wouldn't affect anything.",
"True, but still not quite flexible. Since the goal of the benchmark script is to, I believe, encourage the community to add there runtimes, it's a good to make this as easy-to-use as possible.",
"You're right that we shouldn't require to have both libraries installed in order to benchmark only one of them. I've updated the Benchmark code so that you can run it with only a single library installed.",
"That's great, Lysandre. Thanks for pushing out changes so quickly!"
] | 1,571 | 1,571 | 1,571 | COLLABORATOR | null | I've been trying to run the benchmark, but I gave up after running into a trillion compatibility issues with tensorflow and bazel. To be fair, I just want to contribute and test all there is to test on PyTorch with 4x Tesla V100. It would be great if only the required modules are needed, and not all of them. So only try to import PyTorch or Tensorflow when the tester actually wants to test those frameworks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1594/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1593 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1593/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1593/comments | https://api.github.com/repos/huggingface/transformers/issues/1593/events | https://github.com/huggingface/transformers/pull/1593 | 510,480,033 | MDExOlB1bGxSZXF1ZXN0MzMwODQ3MjE4 | 1,593 | Fix AdamW import error for <1.2 | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1593?src=pr&el=h1) Report\n> Merging [#1593](https://codecov.io/gh/huggingface/transformers/pull/1593?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/702f589848baba97ea4897aa3f0bb937e1ec3bcf?src=pr&el=desc) will **decrease** coverage by `0.77%`.\n> The diff coverage is `82.23%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1593?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1593 +/- ##\n==========================================\n- Coverage 84.73% 83.95% -0.78% \n==========================================\n Files 84 94 +10 \n Lines 12573 13951 +1378 \n==========================================\n+ Hits 10654 11713 +1059 \n- Misses 1919 2238 +319\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1593?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1593/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGwucHk=) | `92.21% <ø> (+0.97%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1593/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2JlcnQucHk=) | `96.6% <ø> (+0.89%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1593/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2dwdDIucHk=) | `94.79% <ø> (+1.31%)` | :arrow_up: |\n| [transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1593/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9ncHQyLnB5) | `96.72% <ø> (ø)` | :arrow_up: |\n| [transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1593/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZ3B0Mi5weQ==) | `88.63% <ø> (ø)` | :arrow_up: |\n| [transformers/configuration\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1593/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fcm9iZXJ0YS5weQ==) | `100% <ø> (ø)` | :arrow_up: |\n| [transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1593/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYmVydC5weQ==) | `87.09% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1593/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX29wZW5haS5weQ==) | `96.04% <ø> (+1.43%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1593/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2Rpc3RpbGJlcnQucHk=) | `98.59% <ø> (+1.98%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_gpt2\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1593/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2dwdDJfdGVzdC5weQ==) | `94.73% <0%> (ø)` | :arrow_up: |\n| ... and [79 more](https://codecov.io/gh/huggingface/transformers/pull/1593/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1593?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1593?src=pr&el=footer). Last update [702f589...3408e84](https://codecov.io/gh/huggingface/transformers/pull/1593?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I just realized that it's better to try to import AdamW in optimization, and if not available define the custom AdamW class."
] | 1,571 | 1,586 | 1,586 | COLLABORATOR | null | closes #1585 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1593/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1593",
"html_url": "https://github.com/huggingface/transformers/pull/1593",
"diff_url": "https://github.com/huggingface/transformers/pull/1593.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1593.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1592 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1592/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1592/comments | https://api.github.com/repos/huggingface/transformers/issues/1592/events | https://github.com/huggingface/transformers/pull/1592 | 510,468,349 | MDExOlB1bGxSZXF1ZXN0MzMwODM3NTk0 | 1,592 | Consider do_lower_case in PreTrainedTokenizer | {
"login": "watkinsm",
"id": 38503580,
"node_id": "MDQ6VXNlcjM4NTAzNTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/38503580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/watkinsm",
"html_url": "https://github.com/watkinsm",
"followers_url": "https://api.github.com/users/watkinsm/followers",
"following_url": "https://api.github.com/users/watkinsm/following{/other_user}",
"gists_url": "https://api.github.com/users/watkinsm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/watkinsm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/watkinsm/subscriptions",
"organizations_url": "https://api.github.com/users/watkinsm/orgs",
"repos_url": "https://api.github.com/users/watkinsm/repos",
"events_url": "https://api.github.com/users/watkinsm/events{/privacy}",
"received_events_url": "https://api.github.com/users/watkinsm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"this lgtm but let's wait for @thomwolf and @LysandreJik to chime in",
"I'd also like to improve the test cases. I'll try to find some time for that this weekend",
"Nice improvement, it would be even better with tests for DistilBERT and XLNet as both those models make use of the `do_lower_case` argument. TransfoXL also uses the `lower_case` argument and XLM the `do_lowercase_and_remove_accent` argument so it might be a good idea to test that those models have the correct behavior.\r\n\r\nPutting tests in the `tokenization_tests_common` would probably be cleaner than in each model's test file, if we test all models rather than a single one.",
"@LysandreJik good points.\r\n\r\n~Now that I'm thinking about it, it seems like it would make more sense to do the lowercasing/accent removal directly in the subclasses (`BertTokenizer`, `XLMtokenizer`, etc.) by overriding the `tokenize()` method from `PreTrainedTokenizer`, performing the normalization there, then calling the super `tokenize()` with the now-normalized text.~\r\n\r\nNever mind, this would result in some silly code duplication.",
"Alright that looks good to me!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1592?src=pr&el=h1) Report\n> Merging [#1592](https://codecov.io/gh/huggingface/transformers/pull/1592?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/de2696f68e20019fef3a5e1b54de10351abb4145?src=pr&el=desc) will **decrease** coverage by `1.22%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1592?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1592 +/- ##\n==========================================\n- Coverage 84.26% 83.03% -1.23% \n==========================================\n Files 104 104 \n Lines 15431 15456 +25 \n==========================================\n- Hits 13003 12834 -169 \n- Misses 2428 2622 +194\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1592?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1592/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <100%> (ø)` | :arrow_up: |\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1592/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `92.21% <100%> (+0.07%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1592/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `9.85% <0%> (-83.1%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1592/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `81.55% <0%> (-15.54%)` | :arrow_down: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1592/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `59.41% <0%> (-12.36%)` | :arrow_down: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1592/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `71.18% <0%> (-2.44%)` | :arrow_down: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1592/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `94.24% <0%> (-2.22%)` | :arrow_down: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1592/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `80.66% <0%> (-1.34%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1592?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1592?src=pr&el=footer). Last update [de2696f...21637d4](https://codecov.io/gh/huggingface/transformers/pull/1592?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok merging!"
] | 1,571 | 1,574 | 1,574 | CONTRIBUTOR | null | As pointed out in #1545, when using an uncased model, and adding a new uncased token, the tokenizer does not correctly identify this in the case that the input text contains the token in a cased format.
For instance, if we load bert-base-uncased into BertTokenizer, and then use .add_tokens() to add "cool-token", we get the expected result for .tokenize('this is a cool-token'). However, we get a possibly unexpected result for .tokenize('this is a cOOl-Token'), which in fact mirrors the result for the former from before the new token was added.
This PR adds
- functionality to PreTrainedTokenizer to handle this situation in case a tokenizer (currently Bert, DistilBert, and XLNet) has the do_lower_case=True kwarg by:
1) lowercasing tokens added with .add_tokens()
2) lowercasing text at the beginning of .tokenize()
- new common test case for tokenizers
XLMTokenizer's `do_lowercase_and_remove_accent` is a bit more complicated and is not included in this PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1592/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1592",
"html_url": "https://github.com/huggingface/transformers/pull/1592",
"diff_url": "https://github.com/huggingface/transformers/pull/1592.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1592.patch",
"merged_at": 1574870719000
} |
https://api.github.com/repos/huggingface/transformers/issues/1591 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1591/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1591/comments | https://api.github.com/repos/huggingface/transformers/issues/1591/events | https://github.com/huggingface/transformers/issues/1591 | 510,402,622 | MDU6SXNzdWU1MTA0MDI2MjI= | 1,591 | Error when trying to reuse hidden states in CTRL | {
"login": "bilal2vec",
"id": 29356759,
"node_id": "MDQ6VXNlcjI5MzU2NzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/29356759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilal2vec",
"html_url": "https://github.com/bilal2vec",
"followers_url": "https://api.github.com/users/bilal2vec/followers",
"following_url": "https://api.github.com/users/bilal2vec/following{/other_user}",
"gists_url": "https://api.github.com/users/bilal2vec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bilal2vec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilal2vec/subscriptions",
"organizations_url": "https://api.github.com/users/bilal2vec/orgs",
"repos_url": "https://api.github.com/users/bilal2vec/repos",
"events_url": "https://api.github.com/users/bilal2vec/events{/privacy}",
"received_events_url": "https://api.github.com/users/bilal2vec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"The same error occurs with the library installed with `git clone` (_master_ version) + torch v1.3.0 + python v3.6.8.\r\n[Here](https://colab.research.google.com/drive/1nawWX6Lrfh9ZVKyRfTLgFSIG355xkPRy#scrollTo=n93UZjq5EIE_) is a more verbose version of the Colab Notebook posted by @bkkaggle with Google Colab.",
"I've just pushed a fix on the branch `fix-ctrl-past`. It should be in the next release.",
"Thanks, closing"
] | 1,571 | 1,573 | 1,573 | CONTRIBUTOR | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): CTRL
Language I am using the model on (English, Chinese....): English
The problem arise when using:
My own script, the colab link is available [here](https://colab.research.google.com/drive/143T4sBda4r2nDYzmuNwi-ZFTbhJWfeOW)
The stack trace is:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-9-ac2d93f8c410> in <module>()
3 for i in range(3):
4 print(i)
----> 5 logits, past = model(**inputs, past=past)
6 logits = logits[0, -1]
7
8 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_ctrl.py in scaled_dot_product_attention(q, k, v, mask, attention_mask, head_mask)
64
65 if mask is not None:
---> 66 scaled_attention_logits += (mask * -1e4)
67
68 if attention_mask is not None:
RuntimeError: The size of tensor a (13) must match the size of tensor b (7) at non-singleton dimension 3
```
The tasks I am working on is:
Generating text with CTRL
## To Reproduce
Just run the colab from the link I posted above
The main part of the code is:
```
input_ids = torch.tensor(tokenizer.encode("Links Hello, my dog is cute")).unsqueeze(0).to(device)
inputs = {'input_ids': input_ids}
with torch.no_grad():
past = None
for i in range(3):
print(i)
logits, past = model(**inputs, past=past)
logits = logits[0, -1]
next_token = logits.argmax()
input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=1)
inputs = {'input_ids': input_ids}
```
## Expected behavior
passing in `past` should not throw an error and should speed up generation
## Environment
* OS: Linux
* Python version: 3.6.8
* PyTorch version: 1.3.0+cu100
* PyTorch Transformers version (or branch): 2.1.1
* Using GPU: yes
* Distributed of parallel setup: No
* Any other relevant information: I'm running on an extended memory colab instance with a K80
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1591/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1590 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1590/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1590/comments | https://api.github.com/repos/huggingface/transformers/issues/1590/events | https://github.com/huggingface/transformers/pull/1590 | 510,400,839 | MDExOlB1bGxSZXF1ZXN0MzMwNzgxOTg3 | 1,590 | [WIP] Fixes for TF Roberta (and other models WIP) | {
"login": "tlkh",
"id": 5409617,
"node_id": "MDQ6VXNlcjU0MDk2MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5409617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tlkh",
"html_url": "https://github.com/tlkh",
"followers_url": "https://api.github.com/users/tlkh/followers",
"following_url": "https://api.github.com/users/tlkh/following{/other_user}",
"gists_url": "https://api.github.com/users/tlkh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tlkh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tlkh/subscriptions",
"organizations_url": "https://api.github.com/users/tlkh/orgs",
"repos_url": "https://api.github.com/users/tlkh/repos",
"events_url": "https://api.github.com/users/tlkh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tlkh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1590?src=pr&el=h1) Report\n> Merging [#1590](https://codecov.io/gh/huggingface/transformers/pull/1590?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4d456542e9d381090f9a00b2bcc5a4cb07f6f3f7?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1590?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1590 +/- ##\n=======================================\n Coverage 86.16% 86.16% \n=======================================\n Files 91 91 \n Lines 13593 13593 \n=======================================\n Hits 11713 11713 \n Misses 1880 1880\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1590?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1590?src=pr&el=footer). Last update [4d45654...0322842](https://codecov.io/gh/huggingface/transformers/pull/1590?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Fixing TF Roberta and TF XLNet seem to be much trickier than XLM. **I will open a separate PR for XLM alone since that works fine.** \r\n\r\nFor TF Roberta and TF XLNet, the solution might be to simply run them eagerly at a rather severe performance penalty. `tf.function` speeds it up a lot but seems to introduce some inconsistency in the weight saving, which might be a TensorFlow issue and I don't yet have the time to investigate.",
"@tlkh did you look into the shape errors any further? I'm getting similar errors in eager mode on tf-nightly, didn't try 2.0 (need some other fixes in 2.1)\r\n\r\n```\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [91,91,64,12] vs. [91,181,64,12]\r\n\t [[node model/tfxl_net_lm_head_model/transformer/layer_._0/rel_attn/add_2 (defined at .../transformers/modeling_tf_xlnet.py:148) ]] [Op:__inference_distributed_function_17027]\r\n```",
"@NathanHowell sorry, I don't have any ideas about that! Seems to be the same error, but oddly running it in eager mode fixed it for me.",
"Thanks a lot @tlkh\r\n\r\nSo I think RoBERTa is now fixed on master (removed the faulty check in the forward pass) and XLM as well (with your other PR).\r\n\r\nDo you want to make a new PR with fixes for XLNet and we close the present one maybe?",
"@thomwolf thanks, I'll close the current PR and open a new one for XLNet after I validate it again. "
] | 1,571 | 1,573 | 1,573 | CONTRIBUTOR | null | When converting the `run_tf_glue.py` example to the same format at `benchmarks.py` to create a standardized benchmark for training, I ran into errors with **training** the non-BERT models with the normal `model.fit()` method. I am attempting to resolve all the errors I encountered in this PR. In particular, I have fixed the errors I have encountered with `TFRobertaForSequenceClassification`, `TFXLMForSequenceClassification`, and `TFXLNetForSequenceClassification`.
### Changes
**Roberta**
* Roberta requires `@tf.function()` on `TFRobertaMainLayer.call()`
* Otherwise, errors encountered:
* `TypeError: You are attempting to use Python control flow in a layer that was not declared to be dynamic. Pass 'dynamic=True' to the class constructor.`
* `OperatorNotAllowedInGraphError: using a 'tf.Tensor' as a Python 'bool' is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.`
* Issues:
* Fails test `TFRobertaModelTest.test_pt_tf_model_equivalence`: `AssertionError: layer.0.attention.self.query.weight not found in PyTorch model`.
**XLM**
* XLX requires changing some Python `assert` statements to `tf.debugging.assert_equal` both in `TFXLMMainLayer.call()` and `gen_mask()`
* Otherwise, errors encountered:
* `TypeError: You are attempting to use Python control flow in a layer that was not declared to be dynamic. Pass 'dynamic=True' to the class constructor.`
* `OperatorNotAllowedInGraphError: using a 'tf.Tensor' as a Python 'bool' is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.`
**XLNet**
* XLNet had a dtype error (float vs int) in line `input_mask = 1.0 - attention_mask`. Since `input_mask` and `attention_mask` are both supposed (afaik) to be int32, I've replace `1.0` with `1`.
* Still has shape error (see below) that I have not managed to track down. **This is particularly confusion because the training works in eager mode!**
* Solution is to simply provide a workaround `model.run_eagerly = True`.
* Of course, this will make the model train much slower (~140s for first epoch). Decorating `TFXLNetForSequenceClassification`'s `call()` method with `tf.function` works, and results in ~80s per first epoch. We cannot decorate the individual `call()` methods (aka create "overlapping" `tf.function`) as that will cause model saving to not work.
* Irregardless of my changes, there is a warning `gradients do not exist for variables ['transformer/mask_emb:0'] when minimizing the loss.` But from my observation the model trains fine. Is this embedding supposed to be trainable in the first place?
* Issues:
* Fails test `TFXLNetModelTest.test_pt_tf_model_equivalence`: `AssertionError: mask_emb not found in PyTorch model`.
Shape error:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [128,128,16,12] vs. [128,255,16,12]
[[node tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/add_3 (defined at /opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1751) ]] [Op:__inference_distributed_function_72170]
```
Do let me know if there are any feedback on the changes I made. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1590/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1590/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1590",
"html_url": "https://github.com/huggingface/transformers/pull/1590",
"diff_url": "https://github.com/huggingface/transformers/pull/1590.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1590.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1589 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1589/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1589/comments | https://api.github.com/repos/huggingface/transformers/issues/1589/events | https://github.com/huggingface/transformers/pull/1589 | 510,368,002 | MDExOlB1bGxSZXF1ZXN0MzMwNzU1NzA5 | 1,589 | Fix architectures count | {
"login": "dataista0",
"id": 4383443,
"node_id": "MDQ6VXNlcjQzODM0NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4383443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dataista0",
"html_url": "https://github.com/dataista0",
"followers_url": "https://api.github.com/users/dataista0/followers",
"following_url": "https://api.github.com/users/dataista0/following{/other_user}",
"gists_url": "https://api.github.com/users/dataista0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dataista0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dataista0/subscriptions",
"organizations_url": "https://api.github.com/users/dataista0/orgs",
"repos_url": "https://api.github.com/users/dataista0/repos",
"events_url": "https://api.github.com/users/dataista0/events{/privacy}",
"received_events_url": "https://api.github.com/users/dataista0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Actually, if we count DistilGPT-2 as a standalone architecture, it should be 10. Do you think you could update it to 10 before we merge? Thanks."
] | 1,571 | 1,576 | 1,576 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1589/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1589",
"html_url": "https://github.com/huggingface/transformers/pull/1589",
"diff_url": "https://github.com/huggingface/transformers/pull/1589.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1589.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/1588 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1588/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1588/comments | https://api.github.com/repos/huggingface/transformers/issues/1588/events | https://github.com/huggingface/transformers/issues/1588 | 510,211,458 | MDU6SXNzdWU1MTAyMTE0NTg= | 1,588 | Using HuggingFace pre-trained transformer to tokenize and generate iterator for a different text than the one it was trained on | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | Hello,
I am trying to do NLP by using HuggingFace transformers, and I have a question. Is it possible to use the pre-trained HuggingFace Transformer-XL and its pre-trained vocabulary to tokenize and generate BPTTIterator for the WikiText2 dataset instead of the WikiText103 that the transformer was originally trained on? If yes, could someone provide me example codes to illustrate how to 1.tokenize and 2. generate BPTTIterator to analyze the WikiText2, based on the pre-trained HuggingFace transformer XL model and its vocabulary?
NOTE: the WikiText2 can be obtained via
```js
import torchtext
# load WikiText-2 dataset and split it into train and test set
train_Wiki2, val_Wiki2, test_Wiki2 = torchtext.datasets.WikiText2.splits(TEXT)
```
or
```js
import lineflow as lf
import lineflow.datasets as lfds
# load WikiText-103 dataset
train_Wiki103 = lfds.WikiText2('train')
test_Wiki103 = lfds.WikiText2('test')
```
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1588/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1587 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1587/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1587/comments | https://api.github.com/repos/huggingface/transformers/issues/1587/events | https://github.com/huggingface/transformers/issues/1587 | 509,811,647 | MDU6SXNzdWU1MDk4MTE2NDc= | 1,587 | Sequence to sequence with GPT model | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We are currently working on implementing seq2seq for most models in the library (see https://github.com/huggingface/transformers/pull/1455). I won't be ready before a week or two.",
"I'm closing this issue, but feel free to reply in #1506 that we leave open for comments on this implementation."
] | 1,571 | 1,571 | 1,571 | NONE | null | Hi, I really appreciate if you could tell me if I can build a seq2seq model with gpt2 like this:
I am getting GPT2 run_generation codes, and I want to finetune it in a way, that I give a
sequence as a context, and then generate another sequence with gpt2, and then I minimize
the cross-entropy loss between the generated sequence and the expected out, and I want to
modify the run_finetune_lm in a way to do it, I was wondering if this way I can make a seq2seq
model with GPT, thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1587/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1586 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1586/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1586/comments | https://api.github.com/repos/huggingface/transformers/issues/1586/events | https://github.com/huggingface/transformers/pull/1586 | 509,730,759 | MDExOlB1bGxSZXF1ZXN0MzMwMjE3NTU2 | 1,586 | Add special tokens to documentation for bert examples to resolve issue: #1561 | {
"login": "enzoampil",
"id": 39557688,
"node_id": "MDQ6VXNlcjM5NTU3Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoampil",
"html_url": "https://github.com/enzoampil",
"followers_url": "https://api.github.com/users/enzoampil/followers",
"following_url": "https://api.github.com/users/enzoampil/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions",
"organizations_url": "https://api.github.com/users/enzoampil/orgs",
"repos_url": "https://api.github.com/users/enzoampil/repos",
"events_url": "https://api.github.com/users/enzoampil/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoampil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1586?src=pr&el=h1) Report\n> Merging [#1586](https://codecov.io/gh/huggingface/transformers/pull/1586?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82f6abd98aaa691ca0adfe21e85a17dc6f386497?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1586?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1586 +/- ##\n=======================================\n Coverage 86.16% 86.16% \n=======================================\n Files 91 91 \n Lines 13593 13593 \n=======================================\n Hits 11713 11713 \n Misses 1880 1880\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1586?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1586/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2dwdDIucHk=) | `84.19% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1586/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX29wZW5haS5weQ==) | `96.04% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1586/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGwucHk=) | `92.21% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1586/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `80.57% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1586/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RyYW5zZm9feGwucHk=) | `75.16% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1586/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2N0cmwucHk=) | `97.75% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1586/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbS5weQ==) | `88.42% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1586/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `88.17% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1586/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `95.45% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1586/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.75% <ø> (ø)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/1586/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1586?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1586?src=pr&el=footer). Last update [82f6abd...d36680d](https://codecov.io/gh/huggingface/transformers/pull/1586?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is great thanks. Actually we should be adding this to all the examples for all the models...",
"@thomwolf Would be happy to make the changes for the rest of the models.",
"@thomwolf added changes for the rest of the pytorch model examples and all the tensorflow model examples\r\n\r\nI used the two bash scripts below to identify files to edit:\r\n```\r\n# Pytorch model examples\r\ngrep -iR \"input_ids = torch.tensor(tokenizer.encode(\" .\r\n\r\n# Tensorflow model examples\r\ngrep -iR \"input_ids = tf.constant(tokenizer.encode(\" .\r\n```\r\n\r\n**UPDATE:** Example documentation changes were implemented for all tensorflow models except for ```modeling_tf_distilbert.py``` since ```TFDistilBertModelTest.test_pt_tf_model_equivalence``` would fail under **build_py3_torch_and_tf** (details in error [logs](https://circleci.com/gh/huggingface/transformers/5245?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link) for commit: ec276d6abad7eae800f1a1a039ddc78fde406009)",
"Thanks for that.\r\n\r\n@LysandreJik even though we will have special tokens added by default in the coming release, maybe we still want to update the doc for the current release with this? (not sure this is possible)",
"I agree that the necessity to add special tokens should be explicit. However, the documentation is based on previous commits so changing the previous documentation would require to change the commit history of the repo (which we should not do). \r\n\r\nWe might need to think of a way to work around that to update the misleading documentation of previous versions like in this case."
] | 1,571 | 1,576 | 1,576 | CONTRIBUTOR | null | **Currently the BERT examples only show the strings encoded without the inclusion of special tokens (e.g. [CLS] and [SEP]) as illustrated below:**
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
sentence = "Hello there, General Kenobi!"
print(tokenizer.encode(sentence))
print(tokenizer.cls_token_id, tokenizer.sep_token_id)
# [7592, 2045, 1010, 2236, 6358, 16429, 2072, 999]
# 101 102
```
**In this pull request i set add_special_tokens=True in order to include special tokens in the documented examples as illustrated below:**
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
sentence = "Hello there, General Kenobi!"
print(tokenizer.encode(sentence, add_special_tokens=True))
print(tokenizer.cls_token_id, tokenizer.sep_token_id)
# [101, 7592, 2045, 1010, 2236, 6358, 16429, 2072, 999, 102]
# 101 102
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1586/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1586",
"html_url": "https://github.com/huggingface/transformers/pull/1586",
"diff_url": "https://github.com/huggingface/transformers/pull/1586.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1586.patch",
"merged_at": 1576928171000
} |
https://api.github.com/repos/huggingface/transformers/issues/1585 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1585/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1585/comments | https://api.github.com/repos/huggingface/transformers/issues/1585/events | https://github.com/huggingface/transformers/issues/1585 | 509,715,862 | MDU6SXNzdWU1MDk3MTU4NjI= | 1,585 | AdamW requires torch>=1.2.0 | {
"login": "carter54",
"id": 26741594,
"node_id": "MDQ6VXNlcjI2NzQxNTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/26741594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/carter54",
"html_url": "https://github.com/carter54",
"followers_url": "https://api.github.com/users/carter54/followers",
"following_url": "https://api.github.com/users/carter54/following{/other_user}",
"gists_url": "https://api.github.com/users/carter54/gists{/gist_id}",
"starred_url": "https://api.github.com/users/carter54/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/carter54/subscriptions",
"organizations_url": "https://api.github.com/users/carter54/orgs",
"repos_url": "https://api.github.com/users/carter54/repos",
"events_url": "https://api.github.com/users/carter54/events{/privacy}",
"received_events_url": "https://api.github.com/users/carter54/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I don't think that's right. AdamW is implemented in transformers.optimization\r\n\r\nhttps://github.com/huggingface/transformers/blob/82f6abd98aaa691ca0adfe21e85a17dc6f386497/transformers/optimization.py#L107\r\n\r\nAs far as I can see that does not require anything specific to torch 1.2. _However_, if you are trying to import [AdamW from torch ](https://pytorch.org/docs/stable/optim.html#torch.optim.AdamW), you may indeed be required to use torch 1.2.\r\n\r\nI haven't compared the implementation in torch vs. transformers, but I'd go with torch's native implementation if you can and otherwise fallback to transformers' implementation.",
"sorry, I didn't show the details:\r\nthe error is from 29 line in transformers/examples/distillation/distiller.py\r\nfrom torch.optim import AdamW\r\nthis AdamW is imported from torch.optim",
"does this really need torch >=1.2? I met this problem",
"my torch version is 1.1",
"No. Importing AdamW from transformers should work with earlier versions. If you're trying to import it directly from torch, then you'll need 1.2+.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Unstale. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,586 | 1,586 | NONE | null | ## 🐛 Bug
<!-- Important information -->
AdamW requires torch>=1.2.0, torch < 1.2.0 will cause an importError: cannot import name 'AdamW'
Model I am using (Bert, XLNet....):
Language I am using the model on (English, Chinese....):
The problem arise when using:
* [ ] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS:
* Python version:
* PyTorch version:
* PyTorch Transformers version (or branch):
* Using GPU ?
* Distributed of parallel setup ?
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1585/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1584 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1584/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1584/comments | https://api.github.com/repos/huggingface/transformers/issues/1584/events | https://github.com/huggingface/transformers/pull/1584 | 509,715,658 | MDExOlB1bGxSZXF1ZXN0MzMwMjA1NTEx | 1,584 | Add special tokens to documentation for bert examples to resolve issue: #1561 | {
"login": "enzoampil",
"id": 39557688,
"node_id": "MDQ6VXNlcjM5NTU3Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoampil",
"html_url": "https://github.com/enzoampil",
"followers_url": "https://api.github.com/users/enzoampil/followers",
"following_url": "https://api.github.com/users/enzoampil/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions",
"organizations_url": "https://api.github.com/users/enzoampil/orgs",
"repos_url": "https://api.github.com/users/enzoampil/repos",
"events_url": "https://api.github.com/users/enzoampil/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoampil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1584?src=pr&el=h1) Report\n> Merging [#1584](https://codecov.io/gh/huggingface/transformers/pull/1584?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82f6abd98aaa691ca0adfe21e85a17dc6f386497?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1584?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1584 +/- ##\n=======================================\n Coverage 86.16% 86.16% \n=======================================\n Files 91 91 \n Lines 13593 13593 \n=======================================\n Hits 11713 11713 \n Misses 1880 1880\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1584?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1584/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `88.17% <ø> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1584?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1584?src=pr&el=footer). Last update [82f6abd...1972e0e](https://codecov.io/gh/huggingface/transformers/pull/1584?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,571 | 1,571 | 1,571 | CONTRIBUTOR | null | **Currently the BERT examples only show the strings encoded without the inclusion of special tokens (e.g. [CLS] and [SEP]) as illustrated below:**
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
sentence = "Hello there, General Kenobi!"
print(tokenizer.encode(sentence))
print(tokenizer.cls_token_id, tokenizer.sep_token_id)
# [7592, 2045, 1010, 2236, 6358, 16429, 2072, 999]
# 101 102
```
**In this pull request i set ```add_special_tokens=True``` in order to include special tokens in the documented examples as illustrated below:**
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
sentence = "Hello there, General Kenobi!"
print(tokenizer.encode(sentence, add_special_tokens=True))
print(tokenizer.cls_token_id, tokenizer.sep_token_id)
# [101, 7592, 2045, 1010, 2236, 6358, 16429, 2072, 999, 102]
# 101 102
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1584/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1584",
"html_url": "https://github.com/huggingface/transformers/pull/1584",
"diff_url": "https://github.com/huggingface/transformers/pull/1584.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1584.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1583 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1583/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1583/comments | https://api.github.com/repos/huggingface/transformers/issues/1583/events | https://github.com/huggingface/transformers/issues/1583 | 509,696,138 | MDU6SXNzdWU1MDk2OTYxMzg= | 1,583 | Question answering for SQuAD with XLNet | {
"login": "kayoyin",
"id": 44864455,
"node_id": "MDQ6VXNlcjQ0ODY0NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/44864455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kayoyin",
"html_url": "https://github.com/kayoyin",
"followers_url": "https://api.github.com/users/kayoyin/followers",
"following_url": "https://api.github.com/users/kayoyin/following{/other_user}",
"gists_url": "https://api.github.com/users/kayoyin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kayoyin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kayoyin/subscriptions",
"organizations_url": "https://api.github.com/users/kayoyin/orgs",
"repos_url": "https://api.github.com/users/kayoyin/repos",
"events_url": "https://api.github.com/users/kayoyin/events{/privacy}",
"received_events_url": "https://api.github.com/users/kayoyin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
Dear huggingface,
Thank you very much for your great implementation of NLP architectures! I'm currently trying to train an XLNet model for question answering in French.
I studied your code to understand how question answering is done with XLNet, but I am struggling to follow how it works. Especially, I would like to understand the reasoning behind `PoolerStartLogits`, `PoolerStartLogits` and `PoolerAnswerClass`.
I also don't quite understand how prediction of the answer indices works during inference time.
I know this is a lot of questions, I appreciate any help you can give me!
Thank you very much!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1583/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1583/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1582 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1582/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1582/comments | https://api.github.com/repos/huggingface/transformers/issues/1582/events | https://github.com/huggingface/transformers/issues/1582 | 509,681,078 | MDU6SXNzdWU1MDk2ODEwNzg= | 1,582 | How does arg --vocab_transform help in extract_distilbert.py? | {
"login": "evehsu",
"id": 17281640,
"node_id": "MDQ6VXNlcjE3MjgxNjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/17281640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/evehsu",
"html_url": "https://github.com/evehsu",
"followers_url": "https://api.github.com/users/evehsu/followers",
"following_url": "https://api.github.com/users/evehsu/following{/other_user}",
"gists_url": "https://api.github.com/users/evehsu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/evehsu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/evehsu/subscriptions",
"organizations_url": "https://api.github.com/users/evehsu/orgs",
"repos_url": "https://api.github.com/users/evehsu/repos",
"events_url": "https://api.github.com/users/evehsu/events{/privacy}",
"received_events_url": "https://api.github.com/users/evehsu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello @evehsu,\r\n\r\nBERT uses an additional non-linearity before the vocabulary projection (see [here](https://github.com/huggingface/transformers/blob/master/transformers/modeling_bert.py#L381)).\r\nIt's a design choice, as far as I know, XLM doesn't a non-linearity right before the vocab projection (the language modeling head).\r\n\r\nI left this option because I experimented with it, but if you want to keep the BERT architecture as unchanged as possible, you should use the `--vocab_transform` to ensure you also extract the pre-trained weights for this non-linearity.\r\n\r\nVictor",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi everyone, I'm new to experiment with bert model distillation. When running extract_distilbert.py on my fine tuned bert model, I came across the argusment vocat_transform.
```
if args.vocab_transform:
for w in ['weight', 'bias']:
compressed_sd[f'vocab_transform.{w}'] = state_dict[f'cls.predictions.transform.dense.{w}']
compressed_sd[f'vocab_layer_norm.{w}'] = state_dict[f'cls.predictions.transform.LayerNorm.{w}']
```
When should we use this argument in running extract_distilbert? Any scenario we could benefit doing so?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1582/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1581 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1581/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1581/comments | https://api.github.com/repos/huggingface/transformers/issues/1581/events | https://github.com/huggingface/transformers/issues/1581 | 509,666,574 | MDU6SXNzdWU1MDk2NjY1NzQ= | 1,581 | Is there a computation/speed advantage to batching inputs into `TransformerModel` to reduce its number calls | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | CONTRIBUTOR | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
For my particular application, I need to have several `output = TransformerModel(inputIDs)` calls per step, from different datasets.
so
```
output1 = TransformerModel(inputIDs_dataset1)
output2 = TransformerModel(inputIDs_dataset2)
output3 = TransformerModel(inputIDs_dataset3)
```
Initially I preferred to have these calls separate, as each dataset has a different average and distribution of sequence length, so keeping these separated would decrease the number of paddings I need to do within each batch.
On the other hand, I imagine that the TransformerModel objects have some optimizations which would make it overall more computationally efficient just to concatenate all the datasets, and make only one call to the `TransformerModel`.
My intuition is towards the latter approach, but would hear takes from those who designed it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1581/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1580 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1580/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1580/comments | https://api.github.com/repos/huggingface/transformers/issues/1580/events | https://github.com/huggingface/transformers/pull/1580 | 509,656,316 | MDExOlB1bGxSZXF1ZXN0MzMwMTYwODcz | 1,580 | Gradient norm clipping should be done right before calling the optimiser | {
"login": "pminervini",
"id": 227357,
"node_id": "MDQ6VXNlcjIyNzM1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pminervini",
"html_url": "https://github.com/pminervini",
"followers_url": "https://api.github.com/users/pminervini/followers",
"following_url": "https://api.github.com/users/pminervini/following{/other_user}",
"gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pminervini/subscriptions",
"organizations_url": "https://api.github.com/users/pminervini/orgs",
"repos_url": "https://api.github.com/users/pminervini/repos",
"events_url": "https://api.github.com/users/pminervini/events{/privacy}",
"received_events_url": "https://api.github.com/users/pminervini/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1580?src=pr&el=h1) Report\n> Merging [#1580](https://codecov.io/gh/huggingface/transformers/pull/1580?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82f6abd98aaa691ca0adfe21e85a17dc6f386497?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1580?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1580 +/- ##\n=======================================\n Coverage 86.16% 86.16% \n=======================================\n Files 91 91 \n Lines 13593 13593 \n=======================================\n Hits 11713 11713 \n Misses 1880 1880\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1580?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1580?src=pr&el=footer). Last update [82f6abd...abd7110](https://codecov.io/gh/huggingface/transformers/pull/1580?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Oh yes, great, thanks, Pasquale. Would you mind fixing the `run_glue` and `run_ner` examples as well?",
"@thomwolf done! what's the best way to check this code before merging?",
"Thanks a lot!\r\nIt should be fine, we have continuous integration tests on `run_glue` and `run_squad` so if it passed at least the code run."
] | 1,571 | 1,571 | 1,571 | CONTRIBUTOR | null | Right now it's done after each step in the gradient accumulation. What do you think? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1580/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1580",
"html_url": "https://github.com/huggingface/transformers/pull/1580",
"diff_url": "https://github.com/huggingface/transformers/pull/1580.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1580.patch",
"merged_at": 1571745560000
} |
https://api.github.com/repos/huggingface/transformers/issues/1579 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1579/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1579/comments | https://api.github.com/repos/huggingface/transformers/issues/1579/events | https://github.com/huggingface/transformers/issues/1579 | 509,626,406 | MDU6SXNzdWU1MDk2MjY0MDY= | 1,579 | seq2seq with gpt2 | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Merging with #1506"
] | 1,571 | 1,571 | 1,571 | NONE | null | Hi,
I want to have a seq2seq model from gpt2, if I change the script of "run_lm_finetuning.py" in a way that it gets a sequence, then make it a context ids, and let it generate another sequence, like "run_generation.py" code, then minimize the cross-entropy loss, does it this way, create a seq2seq model? I rgreatly appreciate your help. thanks a lot. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1579/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1578 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1578/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1578/comments | https://api.github.com/repos/huggingface/transformers/issues/1578/events | https://github.com/huggingface/transformers/issues/1578 | 509,611,258 | MDU6SXNzdWU1MDk2MTEyNTg= | 1,578 | distilled gpt2 to be added to run_generation and run_lm_fintuning | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, DistilGPT-2 is considered to be a checkpoint of GPT-2 in our library (differently to DistilBERT). You can already use DistilGPT-2 for both of these scripts with the following:\r\n```bash\r\npython run_generation --model_type=gpt2 --model_name_or_path=distilgpt2\r\n```"
] | 1,571 | 1,571 | 1,571 | NONE | null | Hi
I greatly appreciated also adding distilled GPT2 to the codes above, thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1578/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1577 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1577/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1577/comments | https://api.github.com/repos/huggingface/transformers/issues/1577/events | https://github.com/huggingface/transformers/pull/1577 | 509,610,055 | MDExOlB1bGxSZXF1ZXN0MzMwMTI3NzY0 | 1,577 | Add feature #1572 which gives support for multiple candidate sequences | {
"login": "enzoampil",
"id": 39557688,
"node_id": "MDQ6VXNlcjM5NTU3Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoampil",
"html_url": "https://github.com/enzoampil",
"followers_url": "https://api.github.com/users/enzoampil/followers",
"following_url": "https://api.github.com/users/enzoampil/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions",
"organizations_url": "https://api.github.com/users/enzoampil/orgs",
"repos_url": "https://api.github.com/users/enzoampil/repos",
"events_url": "https://api.github.com/users/enzoampil/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoampil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please note that main() now returns a list with ```num_samples``` elements inside.\r\n\r\nBecause of this, the test for run_generation.py should be updated to test for ```length``` for each element within the list. This explains why **build_py3_torch** test failed.\r\n\r\nI will update ```ExamplesTests.test_generation``` to reflect the new output format.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1577?src=pr&el=h1) Report\n> Merging [#1577](https://codecov.io/gh/huggingface/transformers/pull/1577?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ef1b8b2ae5ad1057154a126879f7eb8de685f862?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1577?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1577 +/- ##\n=======================================\n Coverage 86.17% 86.17% \n=======================================\n Files 91 91 \n Lines 13595 13595 \n=======================================\n Hits 11715 11715 \n Misses 1880 1880\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1577?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1577?src=pr&el=footer). Last update [ef1b8b2...17dd64e](https://codecov.io/gh/huggingface/transformers/pull/1577?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Latest commit 17dd64e now applies repetition penalty and ```top_k_top_p_filtering``` to each candidate sequence separately.",
"Thanks @enzoampil \r\n\r\nSuperseded by https://github.com/huggingface/transformers/pull/1333 which was just merged to master.\r\n\r\nLet me know if this works for your use case."
] | 1,571 | 1,572 | 1,572 | CONTRIBUTOR | null | **Multiple candidate sequences can be generated by setting ```num_samples > 1``` (still 1 by default).**
EXAMPLE with ```num_samples == 2``` for a GPT2 model:
```
INPUT:
Why did the chicken
OUTPUT:
cross the road <eoq> To go to the other side. <eoa>
eat food <eoq> Because it was hungry <eoa>
```
(above is illustrative w some words changed from the actual output)
**UPDATE:** Multiple candidate sequences can now be generated with _repetition penalty_ and ```top_k_top_p_filtering``` applied separately to each candidate. This allows for independent probability distributions across candidate sequences.
~Samples are generated with replacement to allow for sequences that have similar tokens at the same index (e.g. [CLS], stopwords, punctuations).~
~When ```temperature == 0```, the tokens returned are the top ```num_samples``` logits (first sample gets top 1, second gets top 2, and so on). I realize this might not be the best implementation because it doesn't allow for similar tokens at the same index across samples. I will consider later changing this to just returning ```num_samples``` copies of the top 1 logits (argmax).~ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1577/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1577",
"html_url": "https://github.com/huggingface/transformers/pull/1577",
"diff_url": "https://github.com/huggingface/transformers/pull/1577.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1577.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1576 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1576/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1576/comments | https://api.github.com/repos/huggingface/transformers/issues/1576/events | https://github.com/huggingface/transformers/issues/1576 | 509,596,089 | MDU6SXNzdWU1MDk1OTYwODk= | 1,576 | evaluating on race dataset with checkpoints fine tuned on roberta with fairseq | {
"login": "qshi95",
"id": 23690677,
"node_id": "MDQ6VXNlcjIzNjkwNjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/23690677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qshi95",
"html_url": "https://github.com/qshi95",
"followers_url": "https://api.github.com/users/qshi95/followers",
"following_url": "https://api.github.com/users/qshi95/following{/other_user}",
"gists_url": "https://api.github.com/users/qshi95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qshi95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qshi95/subscriptions",
"organizations_url": "https://api.github.com/users/qshi95/orgs",
"repos_url": "https://api.github.com/users/qshi95/repos",
"events_url": "https://api.github.com/users/qshi95/events{/privacy}",
"received_events_url": "https://api.github.com/users/qshi95/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Do you get any improvement? \r\nThe ACC of eval and test has a huge gap.\r\n",
" --classification-head when converting the models ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am unable to reproduce the results on the RACE dataset. If anyone has been able to reproduce it, could you kindly share the weights of the fine-tuned model ?"
] | 1,571 | 1,582 | 1,580 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I fine tuned a model on race dataset with reberta, following the fairseq instruction, got the result:
| epoch 004 | valid on 'valid' subset: | loss 0.913 | nll_loss 0.003 | ppl 1.00 | num_updates 21849 | best_accuracy 0.846563 | accuracy 0.836129
| epoch 004 | valid on 'valid' subset: | loss 0.913 | nll_loss 0.003 | ppl 1.00 | num_updates 21849 | best_accuracy 0.846563 | accuracy 0.836129
| epoch 004 | valid on 'valid' subset: | loss 0.913 | nll_loss 0.003 | ppl 1.00 | num_updates 21849 | best_accuracy 0.846563 | accuracy 0.836129
| epoch 004 | valid on 'valid' subset: | loss 0.913 | nll_loss 0.003 | ppl 1.00 | num_updates 21849 | best_accuracy 0.846563 | accuracy 0.836129
| saved checkpoint checkpoints/checkpoint4.pt (epoch 4 @ 21849 updates) (writing took 145.8246190547943 seconds)
| done training in 76377.9 seconds
But I load the weight to the transformers with convert_roberta_original_pytorch_checkpoint_to_pytorch script:
python convert_roberta_original_pytorch_checkpoint_to_pytorch.py --roberta_checkpoint_path ../pytorch-transformers-master/data/roberta-best-checkpoint/ --pytorch_dump_folder_path ../pytorch-transformers-master/data/roberta-best-checkpoint/
then evaluate on RACE dataset, I got terrible results on dev set:
model =data/models_roberta_race/
total batch size=8
train num epochs=5
fp16 =False
max seq length =512
eval_acc = 0.4808676079394311
eval_loss = 1.352066347319484
and on test set:
model =data/models_roberta_race/
total batch size=8
train num epochs=5
fp16 =False
max seq length =512
eval_acc = 0.6015403323875153
eval_loss = 1.3087183478393092
I don't know why. Could anyone can help? Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1576/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1576/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1575 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1575/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1575/comments | https://api.github.com/repos/huggingface/transformers/issues/1575/events | https://github.com/huggingface/transformers/issues/1575 | 509,586,024 | MDU6SXNzdWU1MDk1ODYwMjQ= | 1,575 | use gpt2 as a seq2seq model | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Merging with #1506"
] | 1,571 | 1,571 | 1,571 | NONE | null | Hi
could you assist me please and show me with example on how I can use GPT-2 language model decoding method so train seq2seq model? thanks a lot | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1575/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1574 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1574/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1574/comments | https://api.github.com/repos/huggingface/transformers/issues/1574/events | https://github.com/huggingface/transformers/issues/1574 | 509,560,147 | MDU6SXNzdWU1MDk1NjAxNDc= | 1,574 | Why the output is same within a batch use BertForSequenceClassification? | {
"login": "201101050424",
"id": 6504096,
"node_id": "MDQ6VXNlcjY1MDQwOTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6504096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/201101050424",
"html_url": "https://github.com/201101050424",
"followers_url": "https://api.github.com/users/201101050424/followers",
"following_url": "https://api.github.com/users/201101050424/following{/other_user}",
"gists_url": "https://api.github.com/users/201101050424/gists{/gist_id}",
"starred_url": "https://api.github.com/users/201101050424/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/201101050424/subscriptions",
"organizations_url": "https://api.github.com/users/201101050424/orgs",
"repos_url": "https://api.github.com/users/201101050424/repos",
"events_url": "https://api.github.com/users/201101050424/events{/privacy}",
"received_events_url": "https://api.github.com/users/201101050424/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello, could you provide a script so that we may better understand the problem here?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
I use BertForSequenceClassification for classification task, but the output witin a batch became same after just 2 or 3 batch, the value between different batch is different, really strange.
batch 1 output:
[-0.5966, 0.6081],
[-0.4659, 0.3766],
[-0.3595, 0.1334],
[-0.4178, 0.6873],
[-0.3884, 0.2640],
[-0.5017, 0.3465],
[-0.5978, 0.4961],
[-0.3146, 0.6879],
[-0.6525, 0.2702],
[-0.2500, 0.1232],
[-0.3137, 0.4212],
[-0.2663, 0.5169],
[-0.5225, 0.7992],
[-0.4844, 0.1942],
[-0.1459, 0.4033],
[-0.9007, 0.5122],
[-0.5833, 0.8187],
[-0.5552, 0.1253],
[-0.5420, -0.1123]], device='cuda:0', grad_fn=<AddmmBackward>))
2:
(tensor(1.2256, device='cuda:0', grad_fn=<NllLossBackward>), tensor([[ 0.7105, -0.8978],
[ 0.7925, -0.9382],
[ 0.6098, -0.9100],
[ 0.7522, -0.9534],
[ 0.7706, -0.9142],
[ 0.7778, -0.9246],
[ 0.7703, -0.8327],
[ 0.5850, -0.8817],
[ 0.6266, -0.9271],
[ 0.8061, -0.8157],
[ 0.8036, -0.9927],
[ 0.7619, -0.9277],
[ 0.7773, -0.7931],
[ 0.8458, -0.8186],
[ 0.6291, -0.8925],
[ 0.5919, -0.8709],
[ 0.6222, -0.9173],
[ 0.8290, -0.9817],
[ 0.7155, -0.9171],
[ 0.8107, -0.9364]], device='cuda:0', grad_fn=<AddmmBackward>))
3
(tensor(0.7688, device='cuda:0', grad_fn=<NllLossBackward>), tensor([[-0.7892, 0.5464],
[-0.7873, 0.5431],
[-0.7914, 0.5424],
[-0.7938, 0.5448],
[-0.7934, 0.5449],
[-0.7876, 0.5430],
[-0.7973, 0.5446],
[-0.7905, 0.5430],
[-0.7924, 0.5451],
[-0.7900, 0.5438],
[-0.7879, 0.5449],
[-0.7869, 0.5408],
[-0.7924, 0.5458],
[-0.7928, 0.5436],
[-0.7954, 0.5469],
[-0.7900, 0.5429],
[-0.7945, 0.5453],
[-0.8027, 0.5492],
[-0.7937, 0.5437],
[-0.7934, 0.5506]], device='cuda:0', grad_fn=<AddmmBackward>))
(tensor(1.1733, device='cuda:0', grad_fn=<NllLossBackward>), tensor([[ 1.3647, -0.3074],
[ 1.3588, -0.2927],
[ 1.3581, -0.2915],
[ 1.3628, -0.3009],
[ 1.3625, -0.3001],
[ 1.3630, -0.3016],
[ 1.3666, -0.3157],
[ 1.3604, -0.2953],
[ 1.3655, -0.3108],
[ 1.3604, -0.2942],
[ 1.3623, -0.3041],
[ 1.3555, -0.2866],
[ 1.3600, -0.2943],
[ 1.3654, -0.3091],
[ 1.3628, -0.3004],
[ 1.3658, -0.3080],
[ 1.3643, -0.3041],
[ 1.3599, -0.2967],
[ 1.3629, -0.3024],
[ 1.3688, -0.3206]], device='cuda:0', grad_fn=<AddmmBackward>))
<!-- A clear and concise description of the question. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1574/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1573 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1573/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1573/comments | https://api.github.com/repos/huggingface/transformers/issues/1573/events | https://github.com/huggingface/transformers/issues/1573 | 509,548,923 | MDU6SXNzdWU1MDk1NDg5MjM= | 1,573 | GPT2 attention mask and output masking | {
"login": "anandhperumal",
"id": 12907396,
"node_id": "MDQ6VXNlcjEyOTA3Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/12907396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anandhperumal",
"html_url": "https://github.com/anandhperumal",
"followers_url": "https://api.github.com/users/anandhperumal/followers",
"following_url": "https://api.github.com/users/anandhperumal/following{/other_user}",
"gists_url": "https://api.github.com/users/anandhperumal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anandhperumal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anandhperumal/subscriptions",
"organizations_url": "https://api.github.com/users/anandhperumal/orgs",
"repos_url": "https://api.github.com/users/anandhperumal/repos",
"events_url": "https://api.github.com/users/anandhperumal/events{/privacy}",
"received_events_url": "https://api.github.com/users/anandhperumal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing this because I found my answer",
"Hi, would you mind sharing what the answer you found is? Thank you so much!",
"Sorry for the delay. Gpt2 was trained as a CLM model with a fixed block size of data. So there was no need for attention mask. (That is what I understood). "
] | 1,571 | 1,576 | 1,571 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I have couple of questions:
1. In the original gpt2 they didn't pad the sequence, so they didn't need a attention mask, but in other cases where we our input sequence is small and we pad the input, don't we need a attention mask?
2. I have padded the labels in the left with -1, in the cost function how do I skip the padded elements in labels? and same for logits how do I skip the padded elements ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1573/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1572 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1572/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1572/comments | https://api.github.com/repos/huggingface/transformers/issues/1572/events | https://github.com/huggingface/transformers/issues/1572 | 509,545,595 | MDU6SXNzdWU1MDk1NDU1OTU= | 1,572 | Can we generate multiple possible sentences using GPT? | {
"login": "zhaoxy92",
"id": 21225257,
"node_id": "MDQ6VXNlcjIxMjI1MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/21225257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaoxy92",
"html_url": "https://github.com/zhaoxy92",
"followers_url": "https://api.github.com/users/zhaoxy92/followers",
"following_url": "https://api.github.com/users/zhaoxy92/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaoxy92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaoxy92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaoxy92/subscriptions",
"organizations_url": "https://api.github.com/users/zhaoxy92/orgs",
"repos_url": "https://api.github.com/users/zhaoxy92/repos",
"events_url": "https://api.github.com/users/zhaoxy92/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaoxy92/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@zhaoxy92 I happen to have a use case for this as well. I'll add in this feature to the ```run_generation.py```",
"@zhaoxy92 Added this functionality in ```run_generation.py```. You can set the number of candidates generated by setting the argument ```num_samples``` which is set to 1 by default.",
"I think you need to change `top_k_top_p_filtering()` as well.",
"@s-js not sure why we'd have to change```top_k_top_p_filtering()``` since the sampling only happens at ```sample_sequence()```. ```top_k_top_p_filtering()``` only filters the logits, so we can still generate multiple candidate sequences (independent of the filtered distribution).",
"@enzoampil Sorry, I meant repetition penalty (https://github.com/enzoampil/transformers/blob/7facbbe9871fe458b530ae8ce1b4bfefabd47c74/examples/run_generation.py#L142). Each sample has a different set of seen tokens.\r\nAt first I thought you were doing it inside `top_k_top_p_filtering()`.",
"Hi, thanks very much for adding this functionality – I'm trying to implement this into my own notebook and hitting a tensor mismatch error I can't figure out. I hope this is the right forum to post this question, since I'm using the new functionality you created.\r\n\r\nAt line 150: `generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1)`\r\n\r\nI'm getting this error:\r\n```RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 1 and 3 in dimension 0 at /opt/conda/conda-bld/pytorch_1556653114079/work/aten/src/THC/generic/THCTensorMath.cu:71```\r\n\r\nThe debugger shows me the sizes:\r\ngenerated = (3,37)\r\nnext_token.unsqueeze(0) = (1,3)\r\n\r\nSo I figure that next_token tensor shape ought to be (3,1) instead, so I tried changing the line to `next_token.unsqueeze(1)` instead. When I do that I get a `CUDA error: device-side assert triggered`. Did that change fix my problem or just cause a new one?\r\n\r\nAny ideas are greatly appreciated, thank you!",
"hi @buttchurch , did you run ```run_generation.py``` (with the multiple sentence functionality) as a CLI? It should work if you run it from my [fork](https://github.com/enzoampil/transformers/blob/7facbbe9871fe458b530ae8ce1b4bfefabd47c74/examples/run_generation.py#L142). Can you please post here the exact script you ran and the complete error message. \r\n\r\nAlso, my pull [request](https://github.com/enzoampil/transformers/blob/7facbbe9871fe458b530ae8ce1b4bfefabd47c74/examples/run_generation.py#L142) shows ```generated = torch.cat((generated, next_token.unsqueeze(1)), dim=1)``` in the last line of ```sample_sequence``` so I'm not sure where you got that line 150 code snippet.",
"@s-js noted on repetition penalty support. I'll try to find time for this within the next week.",
"hi @enzoampil, thanks for such a quick response! I still don't understand navigating git forks and branches and the different versions of git projects very well, so I have been just going off the main code I find in the transformers github.\r\n\r\nIt's probably not the 'right' way to do it, but I've pulled my own jupyter notebook together from a couple of the transformer example.py files, rather than using the run_generation.py. I think it might be way too long to post here, but I will now try implementing the changes in your fork to my notebook. I'll report back – thanks again for your help, and for creating this new functionality :)\r\n\r\nEdit: It works! Seems like the important bit I was missing was `replacement=True` on the previous line.",
"@buttchurch glad it works for you :) Very welcome!",
"@s-js Latest [commit](https://github.com/huggingface/transformers/commit/17dd64ed939e09c1c9b1fa666390dd69a4731387) now implements _repetition penalty_ and ```top_k_top_p_filtering``` separately per candidate sequence generated.",
"We just merged https://github.com/huggingface/transformers/pull/1333 to master (+ subsequent fixes), can you check that it does what you guys want?\r\n\r\nI'll close the issue for now, re-open if needed."
] | 1,571 | 1,572 | 1,572 | NONE | null | Hi,
Is there any way to generate multiple candidate text sequences using the pretrained generators? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1572/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1571 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1571/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1571/comments | https://api.github.com/repos/huggingface/transformers/issues/1571/events | https://github.com/huggingface/transformers/issues/1571 | 509,516,014 | MDU6SXNzdWU1MDk1MTYwMTQ= | 1,571 | Pytorch Transformers no longer loads SciBert weights, getting `UnicodeDecodeError`. Worked in pytorch_pretrained_bert | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`from_pretrained` expects the following files: `vocab.txt`, `config.json` and `pytorch_model.bin`. \r\n\r\nThus, you only need to extract the `weights.tar.gz` archive. \r\n\r\nThen rename `bert_config.json` to `config.json` and pass the path name to the `from_pretrained` method: this should be `/content/scibert_scivocab_uncased` in your example :)"
] | 1,571 | 1,571 | 1,571 | CONTRIBUTOR | null | ## 🐛 Bug
<!-- Important information -->
When using the old pytorch_pretrained_bert library, I could point the model with `from_pretrained` to the SciBert weights.tar.gz file, and it would load this just. However, if I try this with the Pytorch Transformers, I get this error.
```
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
Model I am using: Bert
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ X] my own modified scripts: (give details)
I have a colab notebook that loads the SciBert weights using the old pytorch_pretrained_bert library, and the new Transformers library.
## To Reproduce
Steps to reproduce the behavior:
Here is the code
```
import requests
import os
import tarfile
import zipfile
import multiprocess
import json
if not os.path.exists('TempDir'):
os.makedirs('TempDir')
#Download SciBert weights and vocab file
import urllib.request
# Download the file from `url` and save it locally under `file_name`:
urllib.request.urlretrieve('https://s3-us-west-2.amazonaws.com/ai2-s2-research/scibert/pytorch_models/scibert_scivocab_uncased.tar', 'scibert.tar')
#Untar weights
import tarfile
tar = tarfile.open('scibert.tar', "r:")
tar.extractall()
tar.close()
#Extract weights
tar = tarfile.open('scibert_scivocab_uncased/weights.tar.gz', "r:gz")
tar.extractall('scibert_scivocab_uncased')
tar.close()
os.listdir('scibert_scivocab_uncased')
!pip install pytorch-pretrained-bert
from pytorch_pretrained_bert import BertModel as OldBertModel
#Works
oldBert = OldBertModel.from_pretrained('/content/scibert_scivocab_uncased/weights.tar.gz', cache_dir= 'TempDir')
!pip install transformers
from transformers import BertModel as NewBertModel
#Doesn't work
newBert = NewBertModel.from_pretrained('/content/scibert_scivocab_uncased/weights.tar.gz', cache_dir= 'TempDir')
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
Here is the error
```
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-14-7e88a8c51c18> in <module>()
----> 1 newBert = NewBertModel.from_pretrained('/content/scibert_scivocab_uncased/weights.tar.gz', cache_dir= 'TempDir')
3 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
285 cache_dir=cache_dir, return_unused_kwargs=True,
286 force_download=force_download,
--> 287 **kwargs
288 )
289 else:
/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
152
153 # Load config
--> 154 config = cls.from_json_file(resolved_config_file)
155
156 if hasattr(config, 'pruned_heads'):
/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in from_json_file(cls, json_file)
184 """Constructs a `BertConfig` from a json file of parameters."""
185 with open(json_file, "r", encoding='utf-8') as reader:
--> 186 text = reader.read()
187 return cls.from_dict(json.loads(text))
188
/usr/lib/python3.6/codecs.py in decode(self, input, final)
319 # decode input (taking the buffer into account)
320 data = self.buffer + input
--> 321 (result, consumed) = self._buffer_decode(data, self.errors, final)
322 # keep undecoded input until the next call
323 self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
For convenience, here is a colab notebook with the code that you can run
https://colab.research.google.com/drive/1xzYYM1_Vo4wRMicBfnfzfAg_47SitwQi
## Expected behavior
pretrained weights should load just fine.
## Environment
* OS: Google Colab
* Python version:
* PyTorch version:
* PyTorch Transformers version (or branch): Current
* Using GPU ?
* Distributed of parallel setup ?
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1571/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1570 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1570/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1570/comments | https://api.github.com/repos/huggingface/transformers/issues/1570/events | https://github.com/huggingface/transformers/pull/1570 | 509,512,914 | MDExOlB1bGxSZXF1ZXN0MzMwMDYwNDk5 | 1,570 | Fix Roberta on TPU | {
"login": "rickysaurav",
"id": 13986039,
"node_id": "MDQ6VXNlcjEzOTg2MDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/13986039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rickysaurav",
"html_url": "https://github.com/rickysaurav",
"followers_url": "https://api.github.com/users/rickysaurav/followers",
"following_url": "https://api.github.com/users/rickysaurav/following{/other_user}",
"gists_url": "https://api.github.com/users/rickysaurav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rickysaurav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rickysaurav/subscriptions",
"organizations_url": "https://api.github.com/users/rickysaurav/orgs",
"repos_url": "https://api.github.com/users/rickysaurav/repos",
"events_url": "https://api.github.com/users/rickysaurav/events{/privacy}",
"received_events_url": "https://api.github.com/users/rickysaurav/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1570?src=pr&el=h1) Report\n> Merging [#1570](https://codecov.io/gh/huggingface/transformers/pull/1570?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82f6abd98aaa691ca0adfe21e85a17dc6f386497?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1570?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1570 +/- ##\n=======================================\n Coverage 86.16% 86.16% \n=======================================\n Files 91 91 \n Lines 13593 13593 \n=======================================\n Hits 11713 11713 \n Misses 1880 1880\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1570?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1570/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `100% <100%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1570?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1570?src=pr&el=footer). Last update [82f6abd...55c3ae1](https://codecov.io/gh/huggingface/transformers/pull/1570?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hum this situation is a bit annoying because we switched from `logger.error` to `tf.print` to solve #1350",
"Is there a specific reason why we have such a warning message for Roberta but not for other models? All models based on BERT are require the special tokens.\r\nI was having the same issues as #1350 on my end using the logger (lots of zmq , operationnotallowed errors) . The solution for me was to remove the entire warning message altogether.\r\nIs that viable in this scenario? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | Fixes #1569
- Revert tf.print() to logger , since tf.print() is an unsupported TPU ops. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1570/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1570",
"html_url": "https://github.com/huggingface/transformers/pull/1570",
"diff_url": "https://github.com/huggingface/transformers/pull/1570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1570.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1569 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1569/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1569/comments | https://api.github.com/repos/huggingface/transformers/issues/1569/events | https://github.com/huggingface/transformers/issues/1569 | 509,512,157 | MDU6SXNzdWU1MDk1MTIxNTc= | 1,569 | TFRobertaForSequenceClassification fails on TPU on Transformers >2.0.0 | {
"login": "rickysaurav",
"id": 13986039,
"node_id": "MDQ6VXNlcjEzOTg2MDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/13986039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rickysaurav",
"html_url": "https://github.com/rickysaurav",
"followers_url": "https://api.github.com/users/rickysaurav/followers",
"following_url": "https://api.github.com/users/rickysaurav/following{/other_user}",
"gists_url": "https://api.github.com/users/rickysaurav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rickysaurav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rickysaurav/subscriptions",
"organizations_url": "https://api.github.com/users/rickysaurav/orgs",
"repos_url": "https://api.github.com/users/rickysaurav/repos",
"events_url": "https://api.github.com/users/rickysaurav/events{/privacy}",
"received_events_url": "https://api.github.com/users/rickysaurav/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (TFRobertaForSequenceClassification):
Language I am using the model on (English):
The problem arise when using:
* [ ] the official example scripts: (give details)
* [x] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. Use a TPU runtime on colab
2. ```python
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
with tf.device('/job:worker'):
with strategy.scope():
# model = TFRobertaForSequenceClassification.from_pretrained('bert-large-uncased-whole-word-masking',num_labels = (len(le.classes_)))
model = TFRobertaForSequenceClassification.from_pretrained('roberta-large',num_labels = 2)
print('model loaded')
inp = np.random.randint(10,100, size=(12800, 64))
inp[:,0]=0
inp[:,63]=2
labels = np.random.randint(2,size = (12800,1))
print('data generated')
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
print('starting fitting')
# model.fit([train_input_ids,train_input_masks],y_train,epochs = 3,batch_size = 64,validation_data=([test_input_ids,test_input_masks], y_test),verbose=1)
model.fit(inp,labels,epochs = 2,batch_size = 64,verbose=1)
## Environment
* OS: Google Colab TPU runtime
* Python version:3.6
* PyTorch version:NA
* PyTorch Transformers version (or branch):2.1.1
* Using GPU ? No
* Distributed of parallel setup ? Yes
## Additional context
The following error gets thrown when calling model.fit()
```
InvalidArgumentError Traceback (most recent call last)
<ipython-input-4-b77065bb89ae> in <module>()
15 print('starting fitting')
16 # model.fit([train_input_ids,train_input_masks],y_train,epochs = 3,batch_size = 64,validation_data=([test_input_ids,test_input_masks], y_test),verbose=1)
---> 17 model.fit(inp,labels,epochs = 2,batch_size = 64,verbose=1)
11 frames
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: Compilation failure: Detected unsupported operations when trying to compile graph tf_roberta_for_sequence_classification_roberta_cond_true_122339[] on XLA_TPU_JIT: PrintV2 (No registered 'PrintV2' OpKernel for XLA_TPU_JIT devices compatible with node {{node PrintV2}}
. Registered: device='CPU'
){{node PrintV2}}
[[tf_roberta_for_sequence_classification/roberta/cond]]
TPU compilation failed
[[tpu_compile_succeeded_assert/_5504150486074133972/_3]]
Additional GRPC error information:
{"created":"@1571518085.015232162","description":"Error received from peer","file":"external/grpc/src/core/lib/surface/call.cc","file_line":1039,"grpc_message":" Compilation failure: Detected unsupported operations when trying to compile graph tf_roberta_for_sequence_classification_roberta_cond_true_122339[] on XLA_TPU_JIT: PrintV2 (No registered 'PrintV2' OpKernel for XLA_TPU_JIT devices compatible with node {{node PrintV2}}\n\t. Registered: device='CPU'\n){{node PrintV2}}\n\t [[tf_roberta_for_sequence_classification/roberta/cond]]\n\tTPU compilation failed\n\t [[tpu_compile_succeeded_assert/_5504150486074133972/_3]]","grpc_status":3} [Op:__inference_distributed_function_154989]
Function call stack:
distributed_function -> distributed_function
```
The reason behind this error seems to be the tf.print() in the following code , which is not supported on TPU.
https://github.com/huggingface/transformers/blob/82f6abd98aaa691ca0adfe21e85a17dc6f386497/transformers/modeling_tf_roberta.py#L78-L80
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1569/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1568 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1568/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1568/comments | https://api.github.com/repos/huggingface/transformers/issues/1568/events | https://github.com/huggingface/transformers/pull/1568 | 509,504,935 | MDExOlB1bGxSZXF1ZXN0MzMwMDU0NDkw | 1,568 | Fix hanging when loading pretrained models | {
"login": "daemon",
"id": 6188572,
"node_id": "MDQ6VXNlcjYxODg1NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6188572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daemon",
"html_url": "https://github.com/daemon",
"followers_url": "https://api.github.com/users/daemon/followers",
"following_url": "https://api.github.com/users/daemon/following{/other_user}",
"gists_url": "https://api.github.com/users/daemon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daemon/subscriptions",
"organizations_url": "https://api.github.com/users/daemon/orgs",
"repos_url": "https://api.github.com/users/daemon/repos",
"events_url": "https://api.github.com/users/daemon/events{/privacy}",
"received_events_url": "https://api.github.com/users/daemon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1568?src=pr&el=h1) Report\n> Merging [#1568](https://codecov.io/gh/huggingface/transformers/pull/1568?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82f6abd98aaa691ca0adfe21e85a17dc6f386497?src=pr&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1568?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1568 +/- ##\n==========================================\n- Coverage 86.16% 86.14% -0.03% \n==========================================\n Files 91 91 \n Lines 13593 13593 \n==========================================\n- Hits 11713 11710 -3 \n- Misses 1880 1883 +3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1568?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1568/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `74.17% <100%> (ø)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1568/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `95.21% <0%> (-1.6%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1568?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1568?src=pr&el=footer). Last update [82f6abd...a2c8c8e](https://codecov.io/gh/huggingface/transformers/pull/1568?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok, LGTM thanks"
] | 1,571 | 1,571 | 1,571 | CONTRIBUTOR | null | - Fix hanging when loading pretrained models from the cache without having internet access. This is a widespread issue on supercomputers whose internal compute nodes are firewalled. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1568/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1568",
"html_url": "https://github.com/huggingface/transformers/pull/1568",
"diff_url": "https://github.com/huggingface/transformers/pull/1568.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1568.patch",
"merged_at": 1571661117000
} |
https://api.github.com/repos/huggingface/transformers/issues/1567 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1567/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1567/comments | https://api.github.com/repos/huggingface/transformers/issues/1567/events | https://github.com/huggingface/transformers/pull/1567 | 509,420,268 | MDExOlB1bGxSZXF1ZXN0MzMwMDAwMzM2 | 1,567 | Added mixed precision (AMP) to inference benchmark | {
"login": "tlkh",
"id": 5409617,
"node_id": "MDQ6VXNlcjU0MDk2MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5409617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tlkh",
"html_url": "https://github.com/tlkh",
"followers_url": "https://api.github.com/users/tlkh/followers",
"following_url": "https://api.github.com/users/tlkh/following{/other_user}",
"gists_url": "https://api.github.com/users/tlkh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tlkh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tlkh/subscriptions",
"organizations_url": "https://api.github.com/users/tlkh/orgs",
"repos_url": "https://api.github.com/users/tlkh/repos",
"events_url": "https://api.github.com/users/tlkh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tlkh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1567?src=pr&el=h1) Report\n> Merging [#1567](https://codecov.io/gh/huggingface/transformers/pull/1567?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/079bfb32fba4f2b39d344ca7af88d79a3ff27c7c?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1567?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1567 +/- ##\n======================================\n Coverage 85.9% 85.9% \n======================================\n Files 91 91 \n Lines 13653 13653 \n======================================\n Hits 11728 11728 \n Misses 1925 1925\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1567?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1567?src=pr&el=footer). Last update [079bfb3...079bfb3](https://codecov.io/gh/huggingface/transformers/pull/1567?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Any reason why you kept the batch sizes so small? With a V100, you should be able to easily pull of a batch size of 64 for seq len 64. (Perhaps that's what the benchmark script uses as default, I don't know EDIT: yes, those are the values from the benchmark script. Not sure why, though.) I'm a bit surprised by the relatively small speed up. I've experienced **much** greater speed ups when using AMP, but that was on PyTorch with apex.",
"@BramVanroy this is for **inference** hence the emphasis on low batch size.",
"Oh, my bad. I was under the impression that the benchmark script included training profiling with PyProf. "
] | 1,571 | 1,572 | 1,572 | CONTRIBUTOR | null | I added a mixed precision option to the benchmark script and ran it on a DGX Station to get the results. As you can see, we can get between 1.2x to up to 4.5x inference speed depending on model, batch size and sequence length.
**Summary**
| Batch Size | Speedup (XLA only) | Speedup (XLA + AMP) | Min. Seq Len* |
| -------------- | --------------------------- | ------------------------------- | ------------------ |
| 1 | 1.1 ~ 1.9 | 1.4 ~ 2.9 | 512 |
| 2 | 1.1 ~ 1.9 | 1.4 ~ 3.4 | 256 |
| 4 | 1.1 ~ 2.1 | 1.2 ~ 3.8 | 128 |
| 8 | 1.1 ~ 3.1 | 1.2 ~ 4.5 | 64 |
*Min. Seq Len refers to minimum sequence length required to not see **any** performance regression at all. For example, at batch size 1:
* Seq Len of 512 tokens see speed up of 1.4~2.1x depending on model
* Seq Len of 256 tokens see speed up of 0.8~1.2x depending on model
Google Sheets with the results [here](https://docs.google.com/spreadsheets/d/1IW7Xbv-yfE8j-T0taqdyoSehca4mNcsyx6u0IXTzSJ4/edit#gid=0). GPU used is a single V100 (16GB). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1567/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1567",
"html_url": "https://github.com/huggingface/transformers/pull/1567",
"diff_url": "https://github.com/huggingface/transformers/pull/1567.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1567.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1566 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1566/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1566/comments | https://api.github.com/repos/huggingface/transformers/issues/1566/events | https://github.com/huggingface/transformers/issues/1566 | 509,419,560 | MDU6SXNzdWU1MDk0MTk1NjA= | 1,566 | error load bert model :not found model file | {
"login": "iambyd",
"id": 11927058,
"node_id": "MDQ6VXNlcjExOTI3MDU4",
"avatar_url": "https://avatars.githubusercontent.com/u/11927058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iambyd",
"html_url": "https://github.com/iambyd",
"followers_url": "https://api.github.com/users/iambyd/followers",
"following_url": "https://api.github.com/users/iambyd/following{/other_user}",
"gists_url": "https://api.github.com/users/iambyd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iambyd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iambyd/subscriptions",
"organizations_url": "https://api.github.com/users/iambyd/orgs",
"repos_url": "https://api.github.com/users/iambyd/repos",
"events_url": "https://api.github.com/users/iambyd/events{/privacy}",
"received_events_url": "https://api.github.com/users/iambyd/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I don't understand when you get this error?",
"In order to understand when you've encountered this bug, as suggested by @iedmrc , you've to write down the source code that generates the bug! And please show your environment (Python, Transformers, PyTorch, TensorFlow versions) too! \r\n\r\n> Error content:\r\n> OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index'] found in directory ./uncased_L-12_H-768_A-12_transformers or `from_tf` set to False\r\n> but file exists\r\n> ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I got the same error when loading a TF BERT model: \r\n```\r\ndir = \"/Users/danielk/ideaProjects/farsi-language-models/src/models/perbert_L-12_H-768_A-12/\"\r\ntokenizer = BertTokenizer.from_pretrained(dir)\r\nconfig = BertConfig.from_json_file(dir + '/bert_config.json')\r\nmodel = TFBertForMaskedLM.from_pretrained(dir, config=config)\r\n```\r\nThe error happens in the last line. \r\n```\r\nTraceback (most recent call last):\r\n File \"6.2.try_tf_bert_transformers.py\", line 8, in <module>\r\n model = TFBertForMaskedLM.from_pretrained(dir, config=config)\r\n File \"/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py\", line 353, in from_pretrained\r\n [WEIGHTS_NAME, TF2_WEIGHTS_NAME], pretrained_model_name_or_path\r\nOSError: Error no file named ['pytorch_model.bin', 'tf_model.h5'] found in directory /Users/danielk/ideaProjects/farsi-language-models/src/models/perbert_L-12_H-768_A-12/ or `from_pt` set to False\r\n```"
] | 1,571 | 1,589 | 1,581 | NONE | null | Error content:
OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index'] found in directory ./uncased_L-12_H-768_A-12_transformers or `from_tf` set to False
but file exists

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1566/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1566/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1565 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1565/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1565/comments | https://api.github.com/repos/huggingface/transformers/issues/1565/events | https://github.com/huggingface/transformers/issues/1565 | 509,384,799 | MDU6SXNzdWU1MDkzODQ3OTk= | 1,565 | How to add the output word vector of bert to my model | {
"login": "e-tuanzi",
"id": 49483010,
"node_id": "MDQ6VXNlcjQ5NDgzMDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/49483010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-tuanzi",
"html_url": "https://github.com/e-tuanzi",
"followers_url": "https://api.github.com/users/e-tuanzi/followers",
"following_url": "https://api.github.com/users/e-tuanzi/following{/other_user}",
"gists_url": "https://api.github.com/users/e-tuanzi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-tuanzi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-tuanzi/subscriptions",
"organizations_url": "https://api.github.com/users/e-tuanzi/orgs",
"repos_url": "https://api.github.com/users/e-tuanzi/repos",
"events_url": "https://api.github.com/users/e-tuanzi/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-tuanzi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I think you can see [here](https://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/). In more details, this tutorial uses **BERT** as **feature extractor** and on-top they have used a **Logistic Regression** model from [Scikit-learn](https://scikit-learn.org/stable/) for the **sentiment analysis** task. \r\n\r\nQuestion: which is the problem in details? Are you not able to connect the feature vector extracted by BERT to a custom classifier on-top? Is the shape of the feature vector fixed?\r\n\r\n> ## Questions & Help\r\n> Hello, I am a student who is learning nlp.\r\n> Now I want to use the word vector output by bert to apply to my model, but **I can't connect the word vector to the network**. Could you give me an example program or tutorial about this which use textCNN or LSTM. You can sent e-mail to **[[email protected]](mailto:[email protected])** or reply me, please.\r\n> Thank you for your kind cooperation!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,582 | 1,582 | NONE | null | ## ❓ Questions & Help
Hello, I am a student who is learning nlp.
Now I want to use the word vector output by bert to apply to my model, but **I can't connect the word vector to the network**. Could you give me an example program or tutorial about this which use textCNN or LSTM. You can sent e-mail to **[email protected]** or reply me, please.
Thank you for your kind cooperation!
<!-- A clear and concise description of the question. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1565/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1564 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1564/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1564/comments | https://api.github.com/repos/huggingface/transformers/issues/1564/events | https://github.com/huggingface/transformers/issues/1564 | 509,380,922 | MDU6SXNzdWU1MDkzODA5MjI= | 1,564 | ALBERT: will it be supported? | {
"login": "xinqipony",
"id": 42603620,
"node_id": "MDQ6VXNlcjQyNjAzNjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/42603620?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinqipony",
"html_url": "https://github.com/xinqipony",
"followers_url": "https://api.github.com/users/xinqipony/followers",
"following_url": "https://api.github.com/users/xinqipony/following{/other_user}",
"gists_url": "https://api.github.com/users/xinqipony/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinqipony/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinqipony/subscriptions",
"organizations_url": "https://api.github.com/users/xinqipony/orgs",
"repos_url": "https://api.github.com/users/xinqipony/repos",
"events_url": "https://api.github.com/users/xinqipony/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinqipony/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"https://github.com/brightmart/albert_zh",
"Please direct all your questions to the main albert topic. https://github.com/huggingface/transformers/issues/1370 \r\n\r\nPlease close this current topic. It does not add anything.",
"... We should extend the issue template and redirect all ALBERT questions to #1370 😂"
] | 1,571 | 1,571 | 1,571 | NONE | null | will you release an ALBERT model?
it sets the new state of art;
# 🌟New model addition
ALBERT: A LITE BERT FOR SELF-SUPERVISED
LEARNING OF LANGUAGE REPRESENTATIONS
https://arxiv.org/pdf/1909.11942.pdf
## Model description

<!-- Important information -->
## Open Source status
* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1564/reactions",
"total_count": 10,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1564/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1563 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1563/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1563/comments | https://api.github.com/repos/huggingface/transformers/issues/1563/events | https://github.com/huggingface/transformers/issues/1563 | 509,361,014 | MDU6SXNzdWU1MDkzNjEwMTQ= | 1,563 | The implementation of grad clipping is not correct when gradient accumulation is enabled | {
"login": "yangyiben",
"id": 6025986,
"node_id": "MDQ6VXNlcjYwMjU5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6025986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangyiben",
"html_url": "https://github.com/yangyiben",
"followers_url": "https://api.github.com/users/yangyiben/followers",
"following_url": "https://api.github.com/users/yangyiben/following{/other_user}",
"gists_url": "https://api.github.com/users/yangyiben/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangyiben/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangyiben/subscriptions",
"organizations_url": "https://api.github.com/users/yangyiben/orgs",
"repos_url": "https://api.github.com/users/yangyiben/repos",
"events_url": "https://api.github.com/users/yangyiben/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangyiben/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This should be fixed in https://github.com/huggingface/transformers/pull/1580",
"yeah @yangyiben please let me know if that merge fixes it",
"#1580 is now merged"
] | 1,571 | 1,572 | 1,572 | NONE | null | torch.nn.utils.clip_grad_norm_ should be applied before optimizer.step() not after each backward
## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....):
Language I am using the model on (English, Chinese....):
The problem arise when using:
* [ ] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS:
* Python version:
* PyTorch version:
* PyTorch Transformers version (or branch):
* Using GPU ?
* Distributed of parallel setup ?
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1563/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1562 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1562/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1562/comments | https://api.github.com/repos/huggingface/transformers/issues/1562/events | https://github.com/huggingface/transformers/issues/1562 | 509,120,090 | MDU6SXNzdWU1MDkxMjAwOTA= | 1,562 | training BERT on coreference resolution | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"There's a newer [approach](https://github.com/mandarjoshi90/coref) using BERT but it's using tensorflow 1.14. I wish if we could get this into hugginface.\r\n"
] | 1,571 | 1,597 | 1,577 | NONE | null | Hi
I really appreciate if you could add codes to train BERT on coref resolution dataset of CONLL-2012, thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1562/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1561 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1561/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1561/comments | https://api.github.com/repos/huggingface/transformers/issues/1561/events | https://github.com/huggingface/transformers/issues/1561 | 509,107,804 | MDU6SXNzdWU1MDkxMDc4MDQ= | 1,561 | [CLS] & [SEP] tokens missing in documentation | {
"login": "hawkeoni",
"id": 27156990,
"node_id": "MDQ6VXNlcjI3MTU2OTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/27156990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hawkeoni",
"html_url": "https://github.com/hawkeoni",
"followers_url": "https://api.github.com/users/hawkeoni/followers",
"following_url": "https://api.github.com/users/hawkeoni/following{/other_user}",
"gists_url": "https://api.github.com/users/hawkeoni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hawkeoni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hawkeoni/subscriptions",
"organizations_url": "https://api.github.com/users/hawkeoni/orgs",
"repos_url": "https://api.github.com/users/hawkeoni/repos",
"events_url": "https://api.github.com/users/hawkeoni/events{/privacy}",
"received_events_url": "https://api.github.com/users/hawkeoni/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@hawkeoni [CLS] and [SEP] tokens are added automatically as long as you use the tokenizer, ```BertTokenizer```",
"@enzoampil It doesn't seem to work.\r\nThe following code\r\n```python\r\nfrom transformers import BertTokenizer\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nsentence = \"Hello there, General Kenobi!\"\r\nprint(tokenizer.encode(sentence))\r\nprint(tokenizer.cls_token_id, tokenizer.sep_token_id)\r\n```\r\nproduces the next output: \r\n\r\n[7592, 2045, 1010, 2236, 6358, 16429, 2072, 999]\r\n101 102\r\n\r\nAs you can see, cls and sep tokens are not in the list.",
"@hawkeoni please try ```print(tokenizer.encode(sentence, add_special_tokens=True))```",
"@enzoampil I'm sorry, but you're missing the point that the documentation is plainly wrong and misleading. That's why I asked whether the tokens should be added at all.",
"@hawkeoni Apologies, yes I did miss your point.\r\n\r\nIs this intentional or just a typo? **Looks like a typo since special tokens weren't added. Setting ```add_special_tokens=True``` should make this correct (will add this in).**\r\n\r\nDo I need to add [CLS] & [SEP] tokens when I fine tune base bert for sequence classification or token classification? **Yes, I believe this is currently handled by ```load_and_cache_examples``` in the sample training scripts (e.g. ```run_ner.py```)**",
"@enzoampil Thanks for your answer! If you plan on fixing this typo, please, fix it everywhere, so this issue never occurs again.\r\nYou can find it with \r\n```bash\r\ngrep -iR \"input_ids = torch.tensor(tokenizer.encode(\" .\r\n```\r\n",
"@hawkeoni thanks for the bash script reco. Ended up using it :)",
"Thank you both for that.\r\nPlease note that the special tokens will be added by default from now on (already on master and in the coming release)."
] | 1,571 | 1,572 | 1,571 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/blob/fd97761c5a977fd22df789d2851cf57c7c9c0930/transformers/modeling_bert.py#L1017-L1023
In this example of bert for token classification input sentence is encoded, but [CLS] & [SEP] tokens are not added. Is this intentional or just a typo?
Do I need to add [CLS] & [SEP] tokens when I fine tune base bert for sequence classification or token classification? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1561/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1560 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1560/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1560/comments | https://api.github.com/repos/huggingface/transformers/issues/1560/events | https://github.com/huggingface/transformers/issues/1560 | 509,033,024 | MDU6SXNzdWU1MDkwMzMwMjQ= | 1,560 | Finetuning OpenAI GPT-2 for another language. | {
"login": "0x01h",
"id": 32897657,
"node_id": "MDQ6VXNlcjMyODk3NjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/32897657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0x01h",
"html_url": "https://github.com/0x01h",
"followers_url": "https://api.github.com/users/0x01h/followers",
"following_url": "https://api.github.com/users/0x01h/following{/other_user}",
"gists_url": "https://api.github.com/users/0x01h/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0x01h/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0x01h/subscriptions",
"organizations_url": "https://api.github.com/users/0x01h/orgs",
"repos_url": "https://api.github.com/users/0x01h/repos",
"events_url": "https://api.github.com/users/0x01h/events{/privacy}",
"received_events_url": "https://api.github.com/users/0x01h/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello, if you want to try and fine-tune GPT-2 to another language, you can just give the `run_lm_finetuning` script your text in the other language on which you want to fine-tune your model. \r\n\r\nHowever, please be aware that according to the language and its distance to the English language (language on which GPT-2 was pre-trained), you may find it hard to obtain good results. ",
"@0x01h\r\n\r\nGPT-2 can produce great results given a proper vocabulary. If you just run `run_lm_finetuning` on your lang dataset it will give you poor results, regardless of language distance from English because the vocab. \r\n\r\nI'd suggest that you train your tokenizer model first and then fine-tune GPT-2 with it. I'm doing that here \r\nhttps://github.com/mgrankin/ru_transformers\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,578 | 1,578 | NONE | null | ## ❓ Questions & Help
Hi,
Is there any option to finetune and use OpenAI GPT-2 for another language except English? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1560/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1559 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1559/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1559/comments | https://api.github.com/repos/huggingface/transformers/issues/1559/events | https://github.com/huggingface/transformers/issues/1559 | 509,011,557 | MDU6SXNzdWU1MDkwMTE1NTc= | 1,559 | Compatibility between DistilBert and Bert models | {
"login": "pvgladkov",
"id": 1284012,
"node_id": "MDQ6VXNlcjEyODQwMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1284012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pvgladkov",
"html_url": "https://github.com/pvgladkov",
"followers_url": "https://api.github.com/users/pvgladkov/followers",
"following_url": "https://api.github.com/users/pvgladkov/following{/other_user}",
"gists_url": "https://api.github.com/users/pvgladkov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pvgladkov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pvgladkov/subscriptions",
"organizations_url": "https://api.github.com/users/pvgladkov/orgs",
"repos_url": "https://api.github.com/users/pvgladkov/repos",
"events_url": "https://api.github.com/users/pvgladkov/events{/privacy}",
"received_events_url": "https://api.github.com/users/pvgladkov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
I have a regular classification task for sentences in russian language.
I used to train `BertForSequenceClassification` with pretrained Bert from [DeepPavlov](http://docs.deeppavlov.ai/en/master/features/models/bert.html) [RuBERT](http://files.deeppavlov.ai/deeppavlov_data/bert/rubert_cased_L-12_H-768_A-12_v2.tar.gz) (using PyTorch). Then I switched to `DistilBertForSequenceClassification`, but still using pretrained RuBert (because there is no pretrained DistilBert with russian language). And it worked.
Then after [this changing](https://github.com/huggingface/transformers/pull/1203/commits/465870c33fe4ade66863ca0edfe13616f9d24da5#diff-9dc1f6db4a89dbf13c19d02a9f27093dL178) there is impossible to load `DistilBertConfig` from `BertConfig` config.json. `DistilBertConfig` uses property decorator for compatibility between `DistilBertConfig` and `BertConfig`, that's why using setattr() causes error `AttributeError: can't set attribute`.
So my question is following: Is it a bug or feature? Is it OK to load DistilBert from pretrained Bert or not? Or maybe the best way for me is to distill RuBERT by myself using your script for distillation? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1559/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1558 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1558/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1558/comments | https://api.github.com/repos/huggingface/transformers/issues/1558/events | https://github.com/huggingface/transformers/issues/1558 | 508,983,477 | MDU6SXNzdWU1MDg5ODM0Nzc= | 1,558 | unable to parse E:/litao/bert/bert-base-cased\config.json as a URL or as a local path | {
"login": "754563116",
"id": 32032029,
"node_id": "MDQ6VXNlcjMyMDMyMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/32032029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/754563116",
"html_url": "https://github.com/754563116",
"followers_url": "https://api.github.com/users/754563116/followers",
"following_url": "https://api.github.com/users/754563116/following{/other_user}",
"gists_url": "https://api.github.com/users/754563116/gists{/gist_id}",
"starred_url": "https://api.github.com/users/754563116/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/754563116/subscriptions",
"organizations_url": "https://api.github.com/users/754563116/orgs",
"repos_url": "https://api.github.com/users/754563116/repos",
"events_url": "https://api.github.com/users/754563116/events{/privacy}",
"received_events_url": "https://api.github.com/users/754563116/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Did you find a solution?",
"> Did you find a solution?\r\n\r\nRename \"bert_config.json\" to \"config.json\"."
] | 1,571 | 1,622 | 1,577 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
train_file = 'E:/litao/bert/SQuAD 1.1/train-v1.1.json'
predict_file = 'E:/litao/bert/SQuAD 1.1/dev-v1.1.json'
model_type = 'bert'
model_name_or_path = 'E:/litao/bert/bert-base-cased'
output_dir = 'E:/litao/bert/transformers-master/examples/output'

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1558/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1557 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1557/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1557/comments | https://api.github.com/repos/huggingface/transformers/issues/1557/events | https://github.com/huggingface/transformers/issues/1557 | 508,889,984 | MDU6SXNzdWU1MDg4ODk5ODQ= | 1,557 | Tuning BERT on our own data set for multi-class classification problem | {
"login": "rshah1990",
"id": 37735152,
"node_id": "MDQ6VXNlcjM3NzM1MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/37735152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rshah1990",
"html_url": "https://github.com/rshah1990",
"followers_url": "https://api.github.com/users/rshah1990/followers",
"following_url": "https://api.github.com/users/rshah1990/following{/other_user}",
"gists_url": "https://api.github.com/users/rshah1990/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rshah1990/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rshah1990/subscriptions",
"organizations_url": "https://api.github.com/users/rshah1990/orgs",
"repos_url": "https://api.github.com/users/rshah1990/repos",
"events_url": "https://api.github.com/users/rshah1990/events{/privacy}",
"received_events_url": "https://api.github.com/users/rshah1990/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | I want to tune pre-trained BERT for multi-class classification with **6 million class, 30 million rows & highly imbalance data set.**
Can we tune BERT in batch of classes?
For example, I will take 15 classes (last layer will have only 15 neuron) and train my BERT model & in next batch use that trained model to train batch of next 15 classes, I just want to understand cons of this process. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1557/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1556 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1556/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1556/comments | https://api.github.com/repos/huggingface/transformers/issues/1556/events | https://github.com/huggingface/transformers/issues/1556 | 508,857,672 | MDU6SXNzdWU1MDg4NTc2NzI= | 1,556 | Does the function of 'evaluate()' change the result? | {
"login": "lceustc",
"id": 37608648,
"node_id": "MDQ6VXNlcjM3NjA4NjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/37608648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lceustc",
"html_url": "https://github.com/lceustc",
"followers_url": "https://api.github.com/users/lceustc/followers",
"following_url": "https://api.github.com/users/lceustc/following{/other_user}",
"gists_url": "https://api.github.com/users/lceustc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lceustc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lceustc/subscriptions",
"organizations_url": "https://api.github.com/users/lceustc/orgs",
"repos_url": "https://api.github.com/users/lceustc/repos",
"events_url": "https://api.github.com/users/lceustc/events{/privacy}",
"received_events_url": "https://api.github.com/users/lceustc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you specify what script you're running, with which parameters? Did you set a random seed?",
"> Could you specify what script you're running, with which parameters? Did you set a random seed?\r\n\r\nfor SEEDS in 99\r\n\r\ndo\r\nCUDA_VISIBLE_DEVICES=2 python run_glue.py \\\r\n --data_dir '/data/transformers/data/RTE/' \\\r\n --model_type 'roberta' \\\r\n --model_name_or_path '/data/transformers/examples/pretrained_model/roberta-mnli/' \\\r\n --task_name 'rte' \\\r\n --output_dir ./$SEEDS \\\r\n --overwrite_output_dir \\\r\n --max_seq_length 128 \\\r\n --do_train \\\r\n --do_eval \\\r\n --evaluate_during_training \\\r\n --per_gpu_train_batch_size 8 \\\r\n --per_gpu_eval_batch_size 8 \\\r\n --gradient_accumulation_steps 2 \\\r\n --learning_rate 1e-5 \\\r\n --num_train_epochs 10 \\\r\n --logging_steps 50 \\\r\n --save_steps -1 \\\r\n --seed $SEEDS \\\r\n\r\ndone\r\n\r\n\r\n",
"> Could you specify what script you're running, with which parameters? Did you set a random seed?\r\n\r\ni change 'roberta' to 'bert' and set the same seed, the result is also different, Is there any wrong with my shell script?"
] | 1,571 | 1,571 | 1,571 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
when i run RTE task , and logging steps=50: the result is:
gloabl_step 50: 0.8953
global_step 100: 0.8953
gloabl_step 150: 0.8916
global_step 200: 0.8736
but when logging steps =100:
global_step 100: 0.8953
global_step 200: 0.8880
when global_step is 200,what cause the difference? i didn't modify any code
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1556/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1555 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1555/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1555/comments | https://api.github.com/repos/huggingface/transformers/issues/1555/events | https://github.com/huggingface/transformers/pull/1555 | 508,808,738 | MDExOlB1bGxSZXF1ZXN0MzI5NTI5Mjk4 | 1,555 | Sample a constant number of tokens for masking in LM finetuning | {
"login": "rakeshchada",
"id": 2664691,
"node_id": "MDQ6VXNlcjI2NjQ2OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2664691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rakeshchada",
"html_url": "https://github.com/rakeshchada",
"followers_url": "https://api.github.com/users/rakeshchada/followers",
"following_url": "https://api.github.com/users/rakeshchada/following{/other_user}",
"gists_url": "https://api.github.com/users/rakeshchada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rakeshchada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rakeshchada/subscriptions",
"organizations_url": "https://api.github.com/users/rakeshchada/orgs",
"repos_url": "https://api.github.com/users/rakeshchada/repos",
"events_url": "https://api.github.com/users/rakeshchada/events{/privacy}",
"received_events_url": "https://api.github.com/users/rakeshchada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1555?src=pr&el=h1) Report\n> Merging [#1555](https://codecov.io/gh/huggingface/transformers/pull/1555?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fd97761c5a977fd22df789d2851cf57c7c9c0930?src=pr&el=desc) will **increase** coverage by `1.42%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1555?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1555 +/- ##\n==========================================\n+ Coverage 84.74% 86.16% +1.42% \n==========================================\n Files 91 91 \n Lines 13593 13593 \n==========================================\n+ Hits 11519 11713 +194 \n+ Misses 2074 1880 -194\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1555?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1555/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.75% <0%> (+1.35%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1555/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `95.45% <0%> (+2.27%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1555/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.28% <0%> (+2.46%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1555/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `80.57% <0%> (+15.1%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1555/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (+17.02%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1555/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (+83.09%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1555?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1555?src=pr&el=footer). Last update [fd97761...090cbd6](https://codecov.io/gh/huggingface/transformers/pull/1555?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @rakeshchada. Closing as superseded by #1814 "
] | 1,571 | 1,573 | 1,573 | CONTRIBUTOR | null | For Masked LM fine-tuning, I think both the original BERT and RoBERTa implementations uniformly sample x number of tokens in *each* sequence for masking (where x = mlm_probability * 100 * sequence_length)
However, The current logic in run_lm_finetuning.py does an indepdendent sampling (from bernoulli distribution) for each token in the sequence. This leads to variance in the number of masked tokens (with the average number still close to x%).
The below example illustrates an extreme case, of the current logic, where no token in the input sequence is masked.
```
In [1]: import numpy as np
...: import torch
...: from transformers import BertTokenizer
...:
...: mlm_probability = 0.15
...: tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
...:
...: tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode('please mask me, o lord!', add_special_tokens=True))
...:
...: input_ids = tokenizer.convert_tokens_to_ids(tokens)
...:
...: inputs = torch.Tensor([input_ids])
...:
...: labels = inputs.clone()
...:
...: probability_matrix = torch.full(labels.shape, mlm_probability)
...:
...: special_tokens_mask = [tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()]
...: probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)
...: masked_indices = torch.bernoulli(probability_matrix).bool()
...:
...:
In [2]: masked_indices
Out[2]: tensor([[False, False, False, False, False, False, False, False, False]])
```
This PR modifies the logic so the percentage of masked tokens is constant (at x).
Separately, the existing and the new masking logic both rely on boolean tensors of pytorch.
So, this also updates README to include the minimum pytorch version needed. (1.2.0) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1555/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1555",
"html_url": "https://github.com/huggingface/transformers/pull/1555",
"diff_url": "https://github.com/huggingface/transformers/pull/1555.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1555.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1554 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1554/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1554/comments | https://api.github.com/repos/huggingface/transformers/issues/1554/events | https://github.com/huggingface/transformers/issues/1554 | 508,786,635 | MDU6SXNzdWU1MDg3ODY2MzU= | 1,554 | GPT2 not in modeltype | {
"login": "tuhinjubcse",
"id": 3104771,
"node_id": "MDQ6VXNlcjMxMDQ3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuhinjubcse",
"html_url": "https://github.com/tuhinjubcse",
"followers_url": "https://api.github.com/users/tuhinjubcse/followers",
"following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}",
"gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions",
"organizations_url": "https://api.github.com/users/tuhinjubcse/orgs",
"repos_url": "https://api.github.com/users/tuhinjubcse/repos",
"events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuhinjubcse/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @tuhinjubcse gpt2 is a text generation model. If you look in the run_glue.py file you will see your options for model selection for using the run_glue.py script.\r\n```\r\nMODEL_CLASSES = {\r\n 'bert': (BertConfig, BertForSequenceClassification, BertTokenizer),\r\n 'xlnet': (XLNetConfig, XLNetForSequenceClassification, XLNetTokenizer),\r\n 'xlm': (XLMConfig, XLMForSequenceClassification, XLMTokenizer),\r\n 'roberta': (RobertaConfig, RobertaForSequenceClassification, RobertaTokenizer),\r\n 'distilbert': (DistilBertConfig, DistilBertForSequenceClassification, DistilBertTokenizer)\r\n}\r\n```",
"gpt2 is a transformer model. Why would it be limited to only generation ??",
"It's not that you can't use it for classification, etc. It's that you would need to make a few changes to the code and model. #1248 . Right now the changes are not made for gpt2. Generally speaking, people use gpt2 for text generation.",
"Autoregressive models are not as good as mlms, at classification tasks. You should check masked language models (mlm) or similars. But it's not impossible, Open AI has shown some example use cases on original [GPT paper](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf). Check the figure 1 and page 6 for more details.\r\nAlso, this is not a bug but just not implemented. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,581 | 1,581 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): GPT2
Language I am using the model on (English, Chinese....): ENGLISH
The problem arise when using:
* [ ] the official example scripts: (give details) run_glue.py
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name) MRPC GLUE
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1.
2.
3.
python ./examples/run_glue.py --model_type gpt2 --model_name_or_path gp
t2 --task_name MRPC --do_train --do_eval --do_lower_case --data_dir ./fake --max_seq_length 512 --per_gpu_eval_batc
h_size=8 --per_gpu_train_batch_size=8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/bot/
10/18/2019 00:22:28 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False,
16-bits training: False
Traceback (most recent call last):
File "./examples/run_glue.py", line 541, in <module>
main()
File "./examples/run_glue.py", line 476, in main
config_class, model_class, tokenizer_class = MODEL_CLASSES[args.model_type]
KeyError: 'gpt2'
## Environment
* OS: UBUNTU LINUX
* Python version: 3.7
* PyTorch version: LATEST
* PyTorch Transformers version (or branch): LATEST
* Using GPU ? YES
* Distributed of parallel setup ? NO
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1554/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1553 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1553/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1553/comments | https://api.github.com/repos/huggingface/transformers/issues/1553/events | https://github.com/huggingface/transformers/pull/1553 | 508,736,364 | MDExOlB1bGxSZXF1ZXN0MzI5NDc1MjUz | 1,553 | Add speed log to examples/run_squad.py | {
"login": "WilliamTambellini",
"id": 109458,
"node_id": "MDQ6VXNlcjEwOTQ1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/109458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WilliamTambellini",
"html_url": "https://github.com/WilliamTambellini",
"followers_url": "https://api.github.com/users/WilliamTambellini/followers",
"following_url": "https://api.github.com/users/WilliamTambellini/following{/other_user}",
"gists_url": "https://api.github.com/users/WilliamTambellini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WilliamTambellini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WilliamTambellini/subscriptions",
"organizations_url": "https://api.github.com/users/WilliamTambellini/orgs",
"repos_url": "https://api.github.com/users/WilliamTambellini/repos",
"events_url": "https://api.github.com/users/WilliamTambellini/events{/privacy}",
"received_events_url": "https://api.github.com/users/WilliamTambellini/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1553?src=pr&el=h1) Report\n> Merging [#1553](https://codecov.io/gh/huggingface/transformers/pull/1553?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fd97761c5a977fd22df789d2851cf57c7c9c0930?src=pr&el=desc) will **increase** coverage by `1.42%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1553?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1553 +/- ##\n==========================================\n+ Coverage 84.74% 86.16% +1.42% \n==========================================\n Files 91 91 \n Lines 13593 13593 \n==========================================\n+ Hits 11519 11713 +194 \n+ Misses 2074 1880 -194\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1553?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1553/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.75% <0%> (+1.35%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1553/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `95.45% <0%> (+2.27%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1553/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.28% <0%> (+2.46%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1553/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `80.57% <0%> (+15.1%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1553/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (+17.02%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1553/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (+83.09%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1553?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1553?src=pr&el=footer). Last update [fd97761...0919389](https://codecov.io/gh/huggingface/transformers/pull/1553?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"up ?",
"Why not, indeed. Ok to merge it."
] | 1,571 | 1,572 | 1,572 | CONTRIBUTOR | null | Add a speed estimate log (time per example)
for evaluation to examples/run_squad.py | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1553/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1553",
"html_url": "https://github.com/huggingface/transformers/pull/1553",
"diff_url": "https://github.com/huggingface/transformers/pull/1553.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1553.patch",
"merged_at": 1572948793000
} |
https://api.github.com/repos/huggingface/transformers/issues/1552 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1552/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1552/comments | https://api.github.com/repos/huggingface/transformers/issues/1552/events | https://github.com/huggingface/transformers/issues/1552 | 508,694,853 | MDU6SXNzdWU1MDg2OTQ4NTM= | 1,552 | There is not space after generating an 'special token' and the next word using gpt2. | {
"login": "fabrahman",
"id": 22799593,
"node_id": "MDQ6VXNlcjIyNzk5NTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/22799593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fabrahman",
"html_url": "https://github.com/fabrahman",
"followers_url": "https://api.github.com/users/fabrahman/followers",
"following_url": "https://api.github.com/users/fabrahman/following{/other_user}",
"gists_url": "https://api.github.com/users/fabrahman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fabrahman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabrahman/subscriptions",
"organizations_url": "https://api.github.com/users/fabrahman/orgs",
"repos_url": "https://api.github.com/users/fabrahman/repos",
"events_url": "https://api.github.com/users/fabrahman/events{/privacy}",
"received_events_url": "https://api.github.com/users/fabrahman/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can you show exactly how you ran ```run_generation.py```?",
"@enzoampil thanks, this is my command:\r\n```python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2_finetuned/ --top_k 10 --temperature 0.8 --top_p 0.0 --stop_token \"<|endoftext|>\" ```",
"Can you try running ```python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2_finetuned/ --top_k 10 --temperature 0.8 --top_p 0.0``` and post the output here. \r\n\r\nIf the output looks fine at this point, just exclude the ```stop_token``` argument when running ```run_generation.py```",
"I see the problem. At the decoding step using ```tokenizer.decode```, we set ```skip_special_tokens=True```.\r\n\r\nIn other words, special tokens are removed at the decoding step, so you the ```stop_token``` argument should not be a special token.",
"@enzoampil \r\n1) I tried without --stop_token and the output is the same.\r\n2) I don't think ``` skip_special_tokens=True ``` is the problem, since I had already set that to **False**.\r\nDid you mean --stop_token is a problem? Am I doing something wrong?",
"Can you try to set ```clean_up_tokenization_spaces=False```",
"Yeah I actually tried that one too and still the output doesn't change. @enzoampil ",
"Would you mind sending over your model files? Can't seem to replicate the issue.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Is it solved?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I know this is old, but I'm having the same issue (and I've never had it before). It was introduced after I added a Tokenizers `BPEDecoder` to my tokenizer. I'm using special tokens as a way of logically parsing my (non-language) input, and I need the special tokens in the output so that another stage of processing can use them for understanding the structure of the prediction. But there are no spaces between my special tokens and the next word. It's not a huge deal, I suppose, because I could fix it in post-processing, but I'd like to know what's up.\r\n\r\n[EDIT] Just to note; in my case it has nothing to do with prediction. I'm just testing the encode/decode of my tokenizer and noticing these spaces missing in the decoded output."
] | 1,571 | 1,649 | 1,582 | NONE | null | ## ❓ Questions & Help
Hi,
I have used ```run_lm_finetuning.py``` to finetune gpt2 and then tried to do some generation. I have added a couple of special token to dictionary and the finetuned gpt2 without any problem.
Then when I am doing the generation using ```run_generation.py```, I realized whenever the model generates a special_token , it clubs it with the next generated token. For example, consider [SEP] is a special token and this is an output:
[SEP]and it has been a very , ....
And this happens with all of my special_tokens. That is if the next token of an special token isn't an special token, then there is no space between them.
Does anyone know the reason?
Best | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1552/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1551 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1551/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1551/comments | https://api.github.com/repos/huggingface/transformers/issues/1551/events | https://github.com/huggingface/transformers/pull/1551 | 508,640,194 | MDExOlB1bGxSZXF1ZXN0MzI5Mzk1MzAy | 1,551 | [FIX] fix repetition penalty in `examples/run_generation.py` | {
"login": "leo-du",
"id": 15058481,
"node_id": "MDQ6VXNlcjE1MDU4NDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/15058481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leo-du",
"html_url": "https://github.com/leo-du",
"followers_url": "https://api.github.com/users/leo-du/followers",
"following_url": "https://api.github.com/users/leo-du/following{/other_user}",
"gists_url": "https://api.github.com/users/leo-du/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leo-du/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leo-du/subscriptions",
"organizations_url": "https://api.github.com/users/leo-du/orgs",
"repos_url": "https://api.github.com/users/leo-du/repos",
"events_url": "https://api.github.com/users/leo-du/events{/privacy}",
"received_events_url": "https://api.github.com/users/leo-du/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1551?src=pr&el=h1) Report\n> Merging [#1551](https://codecov.io/gh/huggingface/transformers/pull/1551?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c5441946112e68441b46866d114bf8d3c29b0c1d?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1551?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1551 +/- ##\n=======================================\n Coverage 86.16% 86.16% \n=======================================\n Files 91 91 \n Lines 13593 13593 \n=======================================\n Hits 11713 11713 \n Misses 1880 1880\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1551?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1551?src=pr&el=footer). Last update [c544194...4f05239](https://codecov.io/gh/huggingface/transformers/pull/1551?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hello! That's great, thank you!"
] | 1,571 | 1,571 | 1,571 | CONTRIBUTOR | null | The repetition penalty in `examples/run_generation.py` is incorrectly implemented due to the following snippet.
```python
for _ in set(generated):
next_token_logits[_] /= repetition_penalty
```
`generated` is a tensor, and python built-in `set` does not compare tensors correctly, e.g.:
```python
>>> import torch
>>> set(torch.cat([torch.arange(2),torch.arange(3)]))
{tensor(0), tensor(1), tensor(1), tensor(0), tensor(2)}
```
This PR fixes this subtle error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1551/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1551",
"html_url": "https://github.com/huggingface/transformers/pull/1551",
"diff_url": "https://github.com/huggingface/transformers/pull/1551.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1551.patch",
"merged_at": 1571338035000
} |
https://api.github.com/repos/huggingface/transformers/issues/1550 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1550/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1550/comments | https://api.github.com/repos/huggingface/transformers/issues/1550/events | https://github.com/huggingface/transformers/issues/1550 | 508,619,252 | MDU6SXNzdWU1MDg2MTkyNTI= | 1,550 | training BERT from scratch for native language PT-BR? Without init weight | {
"login": "calusbr",
"id": 25322394,
"node_id": "MDQ6VXNlcjI1MzIyMzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/25322394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calusbr",
"html_url": "https://github.com/calusbr",
"followers_url": "https://api.github.com/users/calusbr/followers",
"following_url": "https://api.github.com/users/calusbr/following{/other_user}",
"gists_url": "https://api.github.com/users/calusbr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/calusbr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calusbr/subscriptions",
"organizations_url": "https://api.github.com/users/calusbr/orgs",
"repos_url": "https://api.github.com/users/calusbr/repos",
"events_url": "https://api.github.com/users/calusbr/events{/privacy}",
"received_events_url": "https://api.github.com/users/calusbr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | I would like to train BERT from scratch for a textual base in PT-BR (8GB data). Is it possible to use the run_lm_finetuning.py code to perform this process without using the multi-language bert model?
I already have a vocab.txt for the PT-BR base and I don't want to load initial weights.
Is there any script or tutorial to perform this process step by step? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1550/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1549 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1549/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1549/comments | https://api.github.com/repos/huggingface/transformers/issues/1549/events | https://github.com/huggingface/transformers/pull/1549 | 508,546,747 | MDExOlB1bGxSZXF1ZXN0MzI5MzE4OTM4 | 1,549 | Fix token order in xlnet preprocessing for SQuAD | {
"login": "hlums",
"id": 16907204,
"node_id": "MDQ6VXNlcjE2OTA3MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/16907204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hlums",
"html_url": "https://github.com/hlums",
"followers_url": "https://api.github.com/users/hlums/followers",
"following_url": "https://api.github.com/users/hlums/following{/other_user}",
"gists_url": "https://api.github.com/users/hlums/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hlums/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hlums/subscriptions",
"organizations_url": "https://api.github.com/users/hlums/orgs",
"repos_url": "https://api.github.com/users/hlums/repos",
"events_url": "https://api.github.com/users/hlums/events{/privacy}",
"received_events_url": "https://api.github.com/users/hlums/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1549?src=pr&el=h1) Report\n> Merging [#1549](https://codecov.io/gh/huggingface/transformers/pull/1549?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a62835577a2a93642546858b21372e43c1a1ff8?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1549?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1549 +/- ##\n=======================================\n Coverage 85.14% 85.14% \n=======================================\n Files 94 94 \n Lines 13920 13920 \n=======================================\n Hits 11852 11852 \n Misses 2068 2068\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1549?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1549?src=pr&el=footer). Last update [8a62835...9a3b173](https://codecov.io/gh/huggingface/transformers/pull/1549?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is great, thanks @hlums.\r\nI'm adding information on the runs/results in the example's readme and merging.",
"@ShengeBelmendo I think we set them to the right values here as well (cf lines 309-310 in `run_squad`).",
"Ok LGTM, merging this PR, thanks @hlums ",
"@thomwolf oh, I'm sorry for missing that.\r\n\r\nAnd another small question. @hlums Have you tested whether set the `do_lower_case` option or not ? It seems that we shouldn't set this option cuz the model is the cased version and it also didn't be set in official code.\r\n\r\nI'm trying to reproduce the results of xlnet but obviously there are some problems especially for squad2.0, it's still a long way to go.",
"> @thomwolf oh, I'm sorry for missing that.\r\n> \r\n> And another small question. @hlums Have you tested whether set the `do_lower_case` option or not ? It seems that we shouldn't set this option cuz the model is the cased version and it also didn't be set in official code.\r\n> \r\n> I'm trying to reproduce the results of xlnet but obviously there are some problems especially for squad2.0, it's still a long way to go.\r\n\r\nYou are right. I shouldn't have set the do_lower_case flag. I can give it a try some time this week. \r\nHave you tried the latest code on squad 2.0? I thought put the cls token at the right place would help a lot because it's used for unanswerable question classification. ",
"> > @thomwolf oh, I'm sorry for missing that.\r\n> > And another small question. @hlums Have you tested whether set the `do_lower_case` option or not ? It seems that we shouldn't set this option cuz the model is the cased version and it also didn't be set in official code.\r\n> > I'm trying to reproduce the results of xlnet but obviously there are some problems especially for squad2.0, it's still a long way to go.\r\n> \r\n> You are right. I shouldn't have set the do_lower_case flag. I can give it a try some time this week.\r\n> Have you tried the latest code on squad 2.0? I thought put the cls token at the right place would help a lot because it's used for unanswerable question classification.\r\n\r\n@hlums Sorry for late reply. The latest code doesn't work, you will get a f1 score closed to 0 on unanswerable questions. But I have found the reason. \r\n\r\nThe following is a piece of code in forward function of xlnet model, which obviously is the key point of training the model on unanswerable questions using cls token representations. But the default value of tensor `is_impossible`(using to indicate whether this example is answerable) is none, and we also hadn't passed this tensor into forward function. That's the problem. \r\n\r\n``` \r\n if cls_index is not None and is_impossible is not None:\r\n # Predict answerability from the representation of CLS and START\r\n cls_logits = self.answer_class(hidden_states, start_positions=start_positions, cls_index=cls_index)\r\n loss_fct_cls = nn.BCEWithLogitsLoss()\r\n cls_loss = loss_fct_cls(cls_logits, is_impossible)\r\n total_loss += cls_loss * 0.5\r\n```\r\n\r\nI added the `is_impossible` tensor to TensorDataset and model inputs, and got a reasonable result, f1: 84, EM: 80. Maybe I can creat a PR for this, maybe after I find more discrepancies and get better results. I'm working hard to reproduce the results of xlnet on squad2.0, so I hope you can tell me if you have some new ideas or finds.Thanks! ",
"@hlums before your fix:\r\n`xlnet-large-cased`, SQuAD 1.1, 2 epochs, MSL: 512, BS: 48\r\n{\r\n \"exact\": 75.01419110690634,\r\n \"f1\": 82.13017516396678,\r\n \"total\": 10570,\r\n \"HasAns_exact\": 75.01419110690634,\r\n \"HasAns_f1\": 82.13017516396678,\r\n \"HasAns_total\": 10570\r\n}\r\nPost fix, awesome:\r\n{\r\n \"exact\": 85.1371807000946,\r\n \"f1\": 92.24219729313499,\r\n \"total\": 10570,\r\n \"HasAns_exact\": 85.1371807000946,\r\n \"HasAns_f1\": 92.24219729313499,\r\n \"HasAns_total\": 10570\r\n}\r\nThanks again for your fix!\r\n\r\n`xlnet-large-cased`, SQuAD 2.0, max_steps: 8000, MSL: 512, BS: 48\r\n{\r\n \"exact\": 40.95005474606249,\r\n \"f1\": 45.305949189220875,\r\n \"total\": 11873,\r\n \"HasAns_exact\": 81.96693657219973,\r\n \"HasAns_f1\": 90.69121705864026,\r\n \"HasAns_total\": 5928,\r\n \"NoAns_exact\": 0.050462573591253154,\r\n \"NoAns_f1\": 0.050462573591253154,\r\n \"NoAns_total\": 5945\r\n}\r\n\r\nHopefully `xlnet-large-cased` on _SQuAD 2.0_ for the holidays: @ShengeBelmendo https://github.com/huggingface/transformers/pull/1803\r\nLimited what I can contribute to the code/logic, but I can run tests 24 x 7.",
"@ShengeBelmendo I tried turning do_lower_case off, but the model performance didn't change much.\r\n\r\n\r\nSorry I'm not actively working QA anymore so probably won't be able to contribute to the improvement on squad 2.0\r\n\r\nAnother thing mentioned in the XLNet paper is layer-wise learning rate decay. I actually tried implementing it, but it didn't help with the performance on 1.1 for me. See #1198 \r\n\r\nThe pre-processing code in the XLNet repo also looks much more complicated than here. I'm not sure if it has anything to do with the performance discrepancy though. \r\n",
"@hlums ok, tks again. I will check more carefully about the pre-processing part and maybe read all the official code for comparison."
] | 1,571 | 1,573 | 1,572 | CONTRIBUTOR | null | #947
My current result on SQuAD 1.1
{
"exact": 85.45884578997162,
"f1": 92.5974600601065,
"total": 10570,
"HasAns_exact": 85.45884578997162,
"HasAns_f1": 92.59746006010651,
"HasAns_total": 10570
}
My code validation command
```
python /data/home/hlu/transformers/examples/run_squad.py \
--model_type xlnet \
--model_name_or_path xlnet-large-cased \
--do_train \
--do_eval \
--do_lower_case \
--train_file /data/home/hlu/notebooks/NLP/examples/question_answering/train-v1.1.json \
--predict_file /data/home/hlu/notebooks/NLP/examples/question_answering/dev-v1.1.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./wwm_cased_finetuned_squad/ \
--per_gpu_eval_batch_size=4 \
--per_gpu_train_batch_size=4 \
--save_steps 5000
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1549/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1549/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1549",
"html_url": "https://github.com/huggingface/transformers/pull/1549",
"diff_url": "https://github.com/huggingface/transformers/pull/1549.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1549.patch",
"merged_at": 1572878236000
} |
https://api.github.com/repos/huggingface/transformers/issues/1548 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1548/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1548/comments | https://api.github.com/repos/huggingface/transformers/issues/1548/events | https://github.com/huggingface/transformers/pull/1548 | 508,539,560 | MDExOlB1bGxSZXF1ZXN0MzI5MzEzMzU4 | 1,548 | [2.2] - Command-line interface - Pipeline class | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1548?src=pr&el=h1) Report\n> Merging [#1548](https://codecov.io/gh/huggingface/transformers/pull/1548?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33adab2b91697b3e78af618a21ab9f1176281165?src=pr&el=desc) will **decrease** coverage by `1.44%`.\n> The diff coverage is `44.59%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1548?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1548 +/- ##\n==========================================\n- Coverage 81.47% 80.03% -1.45% \n==========================================\n Files 122 128 +6 \n Lines 18342 19325 +983 \n==========================================\n+ Hits 14945 15467 +522 \n- Misses 3397 3858 +461\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1548?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/commands/download.py](https://codecov.io/gh/huggingface/transformers/pull/1548/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbW1hbmRzL2Rvd25sb2FkLnB5) | `0% <0%> (ø)` | |\n| [transformers/commands/run.py](https://codecov.io/gh/huggingface/transformers/pull/1548/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbW1hbmRzL3J1bi5weQ==) | `0% <0%> (ø)` | |\n| [transformers/commands/train.py](https://codecov.io/gh/huggingface/transformers/pull/1548/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbW1hbmRzL3RyYWluLnB5) | `0% <0%> (ø)` | |\n| [transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/1548/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbW1hbmRzL2NvbnZlcnQucHk=) | `0% <0%> (ø)` | |\n| [transformers/tests/model\\_card\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1548/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsX2NhcmRfdGVzdC5weQ==) | `97.5% <100%> (ø)` | :arrow_up: |\n| [transformers/data/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/1548/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvX19pbml0X18ucHk=) | `100% <100%> (ø)` | :arrow_up: |\n| [transformers/data/processors/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/1548/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvcHJvY2Vzc29ycy9fX2luaXRfXy5weQ==) | `100% <100%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1548/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `93.06% <100%> (+1.37%)` | :arrow_up: |\n| [transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/1548/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvcHJvY2Vzc29ycy91dGlscy5weQ==) | `19.37% <12.5%> (-25.53%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1548/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2F1dG8ucHk=) | `32.55% <19.04%> (-5.08%)` | :arrow_down: |\n| ... and [34 more](https://codecov.io/gh/huggingface/transformers/pull/1548/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1548?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1548?src=pr&el=footer). Last update [33adab2...db0795b](https://codecov.io/gh/huggingface/transformers/pull/1548?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,571 | 1,586 | 1,576 | MEMBER | null | Adding a `Pipeline` class that encapsulates a `Tokenizer` and a `Model`.
`Pipelines` take python objects as inputs (lists/dict of string/int/float) and output python objects as well (lists/dict of string/int/float).
`Pipelines` can be used to query and train models and should be framework agnostic (default to TF 2.0 if installed, fallback to PyTorch).
ex:
```python
# load/initialize a text classification model from Bert-base-uncased
pipeline = TextClassificationPipeline.from_pretrained('bert-base-uncased')
# Train the text classification model with lists of strings and associated labels
pipeline.fit(list_of_texts, list_of_labels)
# Predict with the trained classification model
# (input: list of strings, output: list of int)
batched_predictions = pipeline(list_of_texts)
```
Also adding a simple CLI based on these pipeline models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1548/reactions",
"total_count": 6,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/1548/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1548",
"html_url": "https://github.com/huggingface/transformers/pull/1548",
"diff_url": "https://github.com/huggingface/transformers/pull/1548.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1548.patch",
"merged_at": 1576852110000
} |
https://api.github.com/repos/huggingface/transformers/issues/1547 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1547/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1547/comments | https://api.github.com/repos/huggingface/transformers/issues/1547/events | https://github.com/huggingface/transformers/issues/1547 | 508,527,704 | MDU6SXNzdWU1MDg1Mjc3MDQ= | 1,547 | Is it possible/is there a plan to enable continued pretraining? | {
"login": "oligiles0",
"id": 20796307,
"node_id": "MDQ6VXNlcjIwNzk2MzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/20796307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oligiles0",
"html_url": "https://github.com/oligiles0",
"followers_url": "https://api.github.com/users/oligiles0/followers",
"following_url": "https://api.github.com/users/oligiles0/following{/other_user}",
"gists_url": "https://api.github.com/users/oligiles0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oligiles0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oligiles0/subscriptions",
"organizations_url": "https://api.github.com/users/oligiles0/orgs",
"repos_url": "https://api.github.com/users/oligiles0/repos",
"events_url": "https://api.github.com/users/oligiles0/events{/privacy}",
"received_events_url": "https://api.github.com/users/oligiles0/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @oligiles0, you can actually use ```run_lm_finetuning.py``` for this. You can find more details in the **RoBERTa/BERT and masked language modeling** section in the README",
"> Hi @oligiles0, you can actually use `run_lm_finetuning.py` for this. You can find more details in the **RoBERTa/BERT and masked language modeling** section in the README\r\n\r\nThanks very much @enzoampil . Is there a reason this uses a single text file as opposed to taking a folder of text files? I wouldn't want to combine multiple documents because some chunks will then cross documents and interfere with training, but I also wouldn't want to rerun the script for individual documents. ",
"> Thanks very much @enzoampil . Is there a reason this uses a single text file as opposed to taking a folder of text files? I wouldn't want to combine multiple documents because some chunks will then cross documents and interfere with training, but I also wouldn't want to rerun the script for individual documents.\r\n\r\nPlease check https://github.com/huggingface/transformers/issues/1896#issuecomment-557222822\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,581 | 1,581 | NONE | null | ## 🚀 Feature
Standardised interface to pretrain various Transformers with standardised expectations with regards to formatting training data.
## Motivation
To achieve state of the art within a given domain it is not sufficient to take models pretrained on nonspecific literature (wikipedia/books/etc). The ideal situation would be able to leverage all the compute put into this training and then further train on domain literature before fine tuning on a specific task. The great strength of this library is having a standard interface to use new SOTA models and it would be very helpful if this was extended to include further pretraining to help rapidly push domain SOTAs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1547/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1547/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1546 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1546/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1546/comments | https://api.github.com/repos/huggingface/transformers/issues/1546/events | https://github.com/huggingface/transformers/issues/1546 | 508,470,322 | MDU6SXNzdWU1MDg0NzAzMjI= | 1,546 | Q / Note: BERT Masked-LM fails to predict last token in sequence if it is not punctuation | {
"login": "IngoMarquart",
"id": 44617909,
"node_id": "MDQ6VXNlcjQ0NjE3OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/44617909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IngoMarquart",
"html_url": "https://github.com/IngoMarquart",
"followers_url": "https://api.github.com/users/IngoMarquart/followers",
"following_url": "https://api.github.com/users/IngoMarquart/following{/other_user}",
"gists_url": "https://api.github.com/users/IngoMarquart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IngoMarquart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IngoMarquart/subscriptions",
"organizations_url": "https://api.github.com/users/IngoMarquart/orgs",
"repos_url": "https://api.github.com/users/IngoMarquart/repos",
"events_url": "https://api.github.com/users/IngoMarquart/events{/privacy}",
"received_events_url": "https://api.github.com/users/IngoMarquart/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If you play with the script a bit, you can see that the loss for BERT with the MLM head is actually quite high, as someone suggested elsewhere, this may be due to pre-training on different tasks than just MLM",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
I am playing with BERT to see what the distributions of the prediction for a MASK token are. I wrote a quick script that successively masks all words in an input sequence.
This is based on the implementation in the examples (e.g. the lm finetuning script and the examples in the documentation).
In doing so, I found out that BERT generally fails to predict the last word in the sentence if it is not punctuation. With overwhelmingly high likelihood, BERT expects a normal SVO sentence to end with a full stop. While it can predict the correct word (the correct word usually appears in the top 10 most likely tokens), the likelihood as given by softmax is very low.
So by itself this is perhaps not surprising, because the large majority of examples in pre-training will have punctuation, especially if pre-training is not just the MLM but also the sentence prediction.
But I wonder if it should be best-practice to ensure every sentence is punctuated? If the MLM part of BERT consistently predicts punctuation, then a sentence without it will not be efficiently classified compared to one with punctuation, even on downstream tasks, right?
One thing to confirm, of course, would be that this is not an issue of the pyTorch implementation and how attention masks the <SEP> token or something?
What do you think?
Attached is the script, you should just be able to run it.
[lm_test.py.txt](https://github.com/huggingface/transformers/files/3739434/lm_test.py.txt)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1546/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1545 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1545/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1545/comments | https://api.github.com/repos/huggingface/transformers/issues/1545/events | https://github.com/huggingface/transformers/issues/1545 | 508,457,835 | MDU6SXNzdWU1MDg0NTc4MzU= | 1,545 | Adding new tokens to uncased tokenizers - case insensitivity is lost | {
"login": "mbednarski",
"id": 13330503,
"node_id": "MDQ6VXNlcjEzMzMwNTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/13330503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbednarski",
"html_url": "https://github.com/mbednarski",
"followers_url": "https://api.github.com/users/mbednarski/followers",
"following_url": "https://api.github.com/users/mbednarski/following{/other_user}",
"gists_url": "https://api.github.com/users/mbednarski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbednarski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbednarski/subscriptions",
"organizations_url": "https://api.github.com/users/mbednarski/orgs",
"repos_url": "https://api.github.com/users/mbednarski/repos",
"events_url": "https://api.github.com/users/mbednarski/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbednarski/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hmm looks like BertTokenizer's super class handles `.add_tokens()` and the first steps of `.tokenize()`, and doesn't really seem to consider whether the tokens should be made lowercase. I'm not sure whether it's intentional, but I'll make a PR and find out :smile: \r\n\r\nIn the meantime, it might be a good idea to manually lowercase your text before tokenization.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
Hello!
I'm trying to add new tokens to bert-base-uncased. Let's say my token is '**cool-token**' and it was not present in the original vocab
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.tokenize('Sentence with cool-token'))
```
Prints as expected:
`['sentence', 'with', 'cool', '-', 'token']`
Now I add this token
`tokenizer.add_tokens(['cool-token'])`
Once again, prints as expected:
`['sentence', 'with', 'cool-token']`
However, when I try to utilize case-insensitivity my new token seems to be not recognized:
`print(tokenizer.tokenize('SenTenCE wIth cOOl-token'))`
prints
`['sentence', 'with', 'cool', '-', 'token']`
I would expect:
`['sentence', 'with', 'cool-token']`
It seems that custom tokens are not lowercased. Is it expected behavior and I have to `.lower()` my text manually or am I doing something wrong?
Anyway, I <3 your library
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1545/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1544 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1544/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1544/comments | https://api.github.com/repos/huggingface/transformers/issues/1544/events | https://github.com/huggingface/transformers/issues/1544 | 508,414,669 | MDU6SXNzdWU1MDg0MTQ2Njk= | 1,544 | the num_labels in run_squad | {
"login": "zhujun5164",
"id": 49580602,
"node_id": "MDQ6VXNlcjQ5NTgwNjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/49580602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhujun5164",
"html_url": "https://github.com/zhujun5164",
"followers_url": "https://api.github.com/users/zhujun5164/followers",
"following_url": "https://api.github.com/users/zhujun5164/following{/other_user}",
"gists_url": "https://api.github.com/users/zhujun5164/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhujun5164/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhujun5164/subscriptions",
"organizations_url": "https://api.github.com/users/zhujun5164/orgs",
"repos_url": "https://api.github.com/users/zhujun5164/repos",
"events_url": "https://api.github.com/users/zhujun5164/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhujun5164/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @zhujun5164, 2 is the right setting of num_labels for the task. If you look at the model they use (say Bert is BertForQuestionAnswering), you'll see that they get two outputs for each position which is from the num_labels = 2. The two outputs correspond to the start_logits position and the end_logits position.\r\n\r\n```\r\n outputs = self.bert(input_ids,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids, \r\n head_mask=head_mask)\r\n\r\n sequence_output = outputs[0]\r\n\r\n logits = self.qa_outputs(sequence_output)\r\n start_logits, end_logits = logits.split(1, dim=-1)\r\n start_logits = start_logits.squeeze(-1)\r\n end_logits = end_logits.squeeze(-1)\r\n```\r\n\r\nDoes that make sense?",
"thanks @cformosa, and i find that in the untils_squad.py(341-357) the start_position and end_position have defined as a position number in the sentence (may be in token by word piece), it means that the run_squad predict the position number of start_position and end_position. The split(1, dim = -1) in your copy code is split half of the data in the last dim, so it easier to make me misunderstand it have predict one-hot of the start_position and end_position.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
In the run_squad, I have not find any code to define the num_labels. In the modeling_utils, the num_labels have been default as 2, but in the Question & Answer task it may predict the start_position and end_position in the inputs. Is there the code have missing consider the reset of num_labels, or 2 is the right setting of num_labels for the task?
<!-- A clear and concise description of the question. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1544/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1544/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1543 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1543/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1543/comments | https://api.github.com/repos/huggingface/transformers/issues/1543/events | https://github.com/huggingface/transformers/issues/1543 | 508,297,036 | MDU6SXNzdWU1MDgyOTcwMzY= | 1,543 | Where is pytorch-pretrained-BERT? | {
"login": "ShallTearchen",
"id": 25740395,
"node_id": "MDQ6VXNlcjI1NzQwMzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/25740395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShallTearchen",
"html_url": "https://github.com/ShallTearchen",
"followers_url": "https://api.github.com/users/ShallTearchen/followers",
"following_url": "https://api.github.com/users/ShallTearchen/following{/other_user}",
"gists_url": "https://api.github.com/users/ShallTearchen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShallTearchen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShallTearchen/subscriptions",
"organizations_url": "https://api.github.com/users/ShallTearchen/orgs",
"repos_url": "https://api.github.com/users/ShallTearchen/repos",
"events_url": "https://api.github.com/users/ShallTearchen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShallTearchen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`pytorch-pretrained-BERT` is this library, but four or five months ago. It evolved into `pytorch-transformers` as more models were added to the library, before becoming `transformers` as we now have a front-end for both pytorch and tensorflow.",
"is this still an issue?",
"I don't think so. In my opinion, @ShallTearchen would know where it goes pytorch-pretrained-BERT, but I think that @LysandreJik explains very well the transformation of this library to Transformers.\r\nIn my opinion, I'll close this \"issue\"!\r\n\r\n> is this still an issue?"
] | 1,571 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
As the title shows, where is pytorch-pretrained-BERT? Please tell me the path, THX. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1543/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1542 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1542/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1542/comments | https://api.github.com/repos/huggingface/transformers/issues/1542/events | https://github.com/huggingface/transformers/issues/1542 | 508,281,913 | MDU6SXNzdWU1MDgyODE5MTM= | 1,542 | Running CTRL Model On Google Colab Environment | {
"login": "yusufani",
"id": 35346311,
"node_id": "MDQ6VXNlcjM1MzQ2MzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/35346311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yusufani",
"html_url": "https://github.com/yusufani",
"followers_url": "https://api.github.com/users/yusufani/followers",
"following_url": "https://api.github.com/users/yusufani/following{/other_user}",
"gists_url": "https://api.github.com/users/yusufani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yusufani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yusufani/subscriptions",
"organizations_url": "https://api.github.com/users/yusufani/orgs",
"repos_url": "https://api.github.com/users/yusufani/repos",
"events_url": "https://api.github.com/users/yusufani/events{/privacy}",
"received_events_url": "https://api.github.com/users/yusufani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"[In the official repo,](https://github.com/salesforce/ctrl) you can find a 'lower_memory' branch. You can take a look there. As always, you can try to make the batch size and max sequence length smaller, too.",
"Thank you for your help",
"Please close this topic if you have no further questions. "
] | 1,571 | 1,571 | 1,571 | NONE | null | ## ❓ Questions & Help
As you know Google Colab environment has ****12 GB** ram** limit . When i want to run [run_generation.py](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py) file , Colab automatically stops the process. How much ram does the CTRL Model need? Where can I learn that? Or is there another way to run it?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1542/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1541 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1541/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1541/comments | https://api.github.com/repos/huggingface/transformers/issues/1541/events | https://github.com/huggingface/transformers/issues/1541 | 508,229,991 | MDU6SXNzdWU1MDgyMjk5OTE= | 1,541 | Type of model for each GLUE task | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"All models on GLUE should use BertForSequenceClassification (MNLI is 3 class, STS-B is 1 class).",
"As specified in the documentation, for `XxxForSequenceClassification` models:\r\n\r\n```\r\nIf ``config.num_labels == 1`` a regression loss is computed (Mean-Square loss),\r\nIf ``config.num_labels > 1`` a classification loss is computed (Cross-Entropy).\r\n```\r\n\r\nSo you can use `BertForSequenceClassification` for a regression task such as STS-B."
] | 1,571 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
There are nine GLUE tasks, and I wanted to verify which BERT model type is best suited for each task. Can anyone confirm these matchings? I am not sure what to do for STS-B especially, and am unsure if BertForMultipleChoice is perhaps the correct option for MNLI.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1541/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1540 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1540/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1540/comments | https://api.github.com/repos/huggingface/transformers/issues/1540/events | https://github.com/huggingface/transformers/issues/1540 | 508,168,936 | MDU6SXNzdWU1MDgxNjg5MzY= | 1,540 | Should the option to run on TPU in run_glue.py use some sort of xla data parallelizer ? | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Indeed, it would be great to improve the current TPU script to include better optimization, such as using the TPU DataParallel from Pytorch. We haven't gotten to it yet and we'll probably do so soon.\r\n\r\nWe'd be very happy to welcome a PR too! :)",
"Sounds good, I'm trying to figure it out. The part I'm stuck on is that the colab TPU examples use a multi-threading approach, but the official API recommends using multi-processing over multi-threading, so I am wonder if the TPUs require a multi-threading approach. \r\n\r\nOnce I figure this out I think I can update the code. I asked a question on the xla repo about this\r\nhttps://github.com/pytorch/xla/issues/1217\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | CONTRIBUTOR | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
The xla API ( https://github.com/pytorch/xla/blob/master/API_GUIDE.md ) and the TPU colab examples ( https://github.com/pytorch/xla/tree/master/contrib/colab ) each parallelize their data, either using a `torch_xla.distributed.parallel_loader.c` object or a `torch_xla.distributed.data_parallel.DataParallel` object (which uses `ParallelLoader`).
The `run_glue.py` example (https://github.com/huggingface/transformers/blob/master/examples/run_glue.py ) doesn't do this. I am wondering if there was a reason not to use xla's data parallelizers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1540/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1539 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1539/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1539/comments | https://api.github.com/repos/huggingface/transformers/issues/1539/events | https://github.com/huggingface/transformers/issues/1539 | 508,166,767 | MDU6SXNzdWU1MDgxNjY3Njc= | 1,539 | A couple of noob-to-transformers questions | {
"login": "GrahamboJangles",
"id": 36944031,
"node_id": "MDQ6VXNlcjM2OTQ0MDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/36944031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GrahamboJangles",
"html_url": "https://github.com/GrahamboJangles",
"followers_url": "https://api.github.com/users/GrahamboJangles/followers",
"following_url": "https://api.github.com/users/GrahamboJangles/following{/other_user}",
"gists_url": "https://api.github.com/users/GrahamboJangles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GrahamboJangles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GrahamboJangles/subscriptions",
"organizations_url": "https://api.github.com/users/GrahamboJangles/orgs",
"repos_url": "https://api.github.com/users/GrahamboJangles/repos",
"events_url": "https://api.github.com/users/GrahamboJangles/events{/privacy}",
"received_events_url": "https://api.github.com/users/GrahamboJangles/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@GrahamboJangles\r\n\r\n1) If you want to provide _context_ into any model offered by Transformers, you can **fine-tune** the model you've chosen with your custom data in such a way that the model can learn the context. \r\nI suggest you to look the _CTRL_ model developed by SalesForce. By using this model, you can pass a parameter called _control_code_ which specify the domain of the generated text by the model itself, for example Fitness, Funny, Diet, etc. [Here](https://github.com/huggingface/transformers/blob/master/transformers/modeling_ctrl.py) you can find the source code of the model implemented in Transformers, and [here](https://arxiv.org/pdf/1909.05858.pdf) you can find the scientific paper about it.\r\n\r\n2) **Yes**, you can train the model you've chosen from scratch. You can import it with the pre-trained weights, unfreeze all layers and set random weights and starting to train the model.\r\n\r\n3) There are many Python scripts written in TensorFlow 2.0 or PyTorch for fine-tuning the model. The from-scratch training scripts are missing, at the moment (in my best knowledge). You can find maybe some advide on the _Issues_ page.\r\n\r\n4) **Yes**, and I think that in the Python scripts in this library they suggest to you to use a particular set of models for a certain task. In more details, they have developed the base model architecture, and they have added some layers on-top for addressing a particular task, e.g. you can use _BertForTokenClassification_, _RobertaForTokenClassification_, _DistilBertForTokenClassification_, _CamembertForTokenClassification_ for token classification. More details [here](https://github.com/huggingface/transformers/blob/master/examples/run_ner.py).\r\n\r\n> ## Questions & Help\r\n> #### If you don't want to or don't know the answer to all of these, just answer some that you know!\r\n> 1. How is it that you can provide context to these models? Say, if you want to summarize or pull data from a text. Do you have to train it on that text or just put it somehow in the prompt?\r\n> 2. Can you train each model? Like, if you wanted to, completely unfreeze and retrain each model?\r\n> 3. How can I train? I ran into a bunch of issues including [this one](https://github.com/huggingface/transformers/issues/1517) when just running the official sample scripts in Colab.\r\n> 4. Are certain models better at certain tasks? Which ones are good for what?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,582 | 1,582 | NONE | null | ## ❓ Questions & Help
#### If you don't want to or don't know the answer to all of these, just answer some that you know!
1. How is it that you can provide context to these models? Say, if you want to summarize or pull data from a text. Do you have to train it on that text or just put it somehow in the prompt?
2. Can you train each model? Like, if you wanted to, completely unfreeze and retrain each model?
4. How can I train? I ran into a bunch of issues including [this one](https://github.com/huggingface/transformers/issues/1517) when just running the official sample scripts in Colab.
5. Are certain models better at certain tasks? Which ones are good for what? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1539/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1538 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1538/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1538/comments | https://api.github.com/repos/huggingface/transformers/issues/1538/events | https://github.com/huggingface/transformers/issues/1538 | 508,140,503 | MDU6SXNzdWU1MDgxNDA1MDM= | 1,538 | Fine-tune RoBERTa on WikiText-2 | {
"login": "estoica111",
"id": 56658382,
"node_id": "MDQ6VXNlcjU2NjU4Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/56658382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/estoica111",
"html_url": "https://github.com/estoica111",
"followers_url": "https://api.github.com/users/estoica111/followers",
"following_url": "https://api.github.com/users/estoica111/following{/other_user}",
"gists_url": "https://api.github.com/users/estoica111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/estoica111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/estoica111/subscriptions",
"organizations_url": "https://api.github.com/users/estoica111/orgs",
"repos_url": "https://api.github.com/users/estoica111/repos",
"events_url": "https://api.github.com/users/estoica111/events{/privacy}",
"received_events_url": "https://api.github.com/users/estoica111/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, could you give us a bit more information? For example, you seem to be running this on a GPU, are you running on a distributed setting? Could you list your software versions (python, torch, transformers)?",
"Thank you for your response. I am running on a single machine with one gpu,\nPython 3.6.8, pytorch_transformers 1.2.0 (from setup.py), torch>=1.0.0\n(from requirements.txt). Linux 4.15.0-1044-gcp, NVIDIA-SMI 418.40.04,\nDriver Version: 418.40.04, CUDA Version: 10.1. Thank you for your help.\n\nOn Thu, Oct 17, 2019 at 11:26 AM Lysandre Debut <[email protected]>\nwrote:\n\n> Hi, could you give us a bit more information? For example, you seem to be\n> running this on a GPU, are you running on a distributed setting? Could you\n> list your software versions (python, torch, transformers)?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/1538?email_source=notifications&email_token=ANQITTRRWMQRRYU7CBCTFXLQPCU6ZA5CNFSM4JBR7JA2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEBRCGXQ#issuecomment-543302494>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ANQITTTS3UTD4XDKO5MGVTTQPCU6ZANCNFSM4JBR7JAQ>\n> .\n>\n",
"Does the error still happen if you remove `CUDA_LAUNCH_BLOCKING=1` ?",
"yes, the error happens at\nFile \"run_lm_finetuning.py\", line 472, in main\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\n File \"run_lm_finetuning.py\", line 209, in train\n outputs = model(inputs, masked_lm_labels=labels) if args.mlm else\nmodel(inputs, labels=labels)\n.......................(other info) .......\nresult = self.forward(*input, **kwargs)\noutput = input.matmul(weight.t())\nRuntimeError: cublas runtime error : resource allocation failed at\n/pytorch/aten/src/THC/THCGeneral.cpp:216\nEpoch: 0%|\n\n | 0/1 [00:00<?, ?it/s]\nIteration: 0%|\nThank you.\n\nOn Thu, Oct 17, 2019 at 1:46 PM Lysandre Debut <[email protected]>\nwrote:\n\n> Does the error still happen if you remove CUDA_LAUNCH_BLOCKING=1 ?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/1538?email_source=notifications&email_token=ANQITTS7JZTEFDMUUEHJ3NDQPDFJTA5CNFSM4JBR7JA2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEBRO53Q#issuecomment-543354606>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ANQITTWFFITTOT7DILCJQMDQPDFJTANCNFSM4JBR7JAQ>\n> .\n>\n",
"I have also noticed this issue when trying to fine-tune a RoBERTa language model.\r\n\r\nPart of the issue appears to be in the the calculation of the maximum sequence length in `run_lm_finetuning.py`\r\n\r\n```\r\nif args.block_size <= 0:\r\n args.block_size = tokenizer.max_len_single_sentence # Our input block size will be the max possible for the model\r\n```\r\n\r\nThis produces a cached file like this: `cached_lm_999999999998_wiki.train.raw`\r\nManually checking shows that it is indeed setting the `args.block_size` parameter to 999999999998\r\n\r\nAdding the `--block-size = 512` argument prevents this, but then leads to a similar index error to the one @estoica111 is experiencing.\r\n\r\nStrangely, if I reduce to `--block-size = 500`, the model trains successfully, but the reported perplexity on the test set seems far too low:\r\n\r\n```\r\n10/18/2019 15:35:44 - INFO - __main__ - Saving features into cached file ~/wikitext-2-raw/cached_lm_500_wiki.test.raw\r\n10/18/2019 15:35:44 - INFO - __main__ - ***** Running evaluation *****\r\n10/18/2019 15:35:44 - INFO - __main__ - Num examples = 572\r\n10/18/2019 15:35:44 - INFO - __main__ - Batch size = 32\r\nEvaluating: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 18/18 [00:08<00:00, 2.14it/s]\r\n10/18/2019 15:35:53 - INFO - __main__ - ***** Eval results *****\r\n10/18/2019 15:35:53 - INFO - __main__ - perplexity = tensor(1.0631)\r\n```\r\n\r\n**Update:** I get the exact same perplexity (1.0631) even with the standard pre-trained RoBERTa model on wikitext-2-raw test set. Very confused.",
"I'm having a hard time replicating this error in transformers 2.1.1. Would it be possible for you to try this on the latest version and let me know your results? \r\n\r\nI get a 1.03 perplexity fine-tuning on `wiki.train.raw` and evaluating on `wiki.test.raw`, vs 1.45 without fine-tuning.",
"@LysandreJik, I was on 2.1.1, but just in case I did a full-reinstall of the environment from master and that seems to have fixed the perplexity issue (now getting 1.03 - 1.06 after finetuning in `wiki.train.raw`.\r\n\r\nHowever, the default behavior for block_size still does not work with the provided example. I have to set `block_size 500`, or I get the errors I described above. `block_size 512` also still produces a similar error to @estoica111 .\r\n\r\n```\r\n/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [171,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nEvaluating: 0%| | 0/18 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"run_lm_finetuning.py\", line 543, in <module>\r\n main()\r\n File \"run_lm_finetuning.py\", line 535, in main\r\n result = evaluate(args, model, tokenizer, prefix=prefix)\r\n File \"run_lm_finetuning.py\", line 315, in evaluate\r\n outputs = model(batch, masked_lm_labels=batch) if args.mlm else model(batch, labels=batch)\r\n File \"/home/dscripka/software/venv_transformers/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/dscripka/software/transformers/transformers/modeling_roberta.py\", line 242, in forward\r\n head_mask=head_mask)\r\n File \"/home/dscripka/software/venv_transformers/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/dscripka/software/transformers/transformers/modeling_roberta.py\", line 182, in forward\r\n head_mask=head_mask)\r\n File \"/home/dscripka/software/transformers/transformers/modeling_bert.py\", line 627, in forward\r\n head_mask=head_mask)\r\n File \"/home/dscripka/software/venv_transformers/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/dscripka/software/transformers/transformers/modeling_bert.py\", line 348, in forward\r\n layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i])\r\n File \"/home/dscripka/software/venv_transformers/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/dscripka/software/transformers/transformers/modeling_bert.py\", line 326, in forward\r\n attention_outputs = self.attention(hidden_states, attention_mask, head_mask)\r\n File \"/home/dscripka/software/venv_transformers/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/dscripka/software/transformers/transformers/modeling_bert.py\", line 283, in forward\r\n self_outputs = self.self(input_tensor, attention_mask, head_mask)\r\n File \"/home/dscripka/software/venv_transformers/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/dscripka/software/transformers/transformers/modeling_bert.py\", line 202, in forward\r\n mixed_query_layer = self.query(hidden_states)\r\n File \"/home/dscripka/software/venv_transformers/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/dscripka/software/venv_transformers/lib/python3.6/site-packages/torch/nn/modules/linear.py\", line 87, in forward\r\n return F.linear(input, self.weight, self.bias)\r\n File \"/home/dscripka/software/venv_transformers/lib/python3.6/site-packages/torch/nn/functional.py\", line 1371, in linear\r\n output = input.matmul(weight.t())\r\nRuntimeError: cublas runtime error : resource allocation failed at /pytorch/aten/src/THC/THCGeneral.cpp:216\r\n```\r\n\r\nSoftware versions:\r\n\r\nPython: 3.6.5\r\nTransformers: 2.1.1 (master)\r\nCuda: 10.0\r\nTorch: 1.2.0",
"Maybe this can be caused by `RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows`. I got the same error on cuda. But trying to compute a single iteration on CPU, I get more clear error description: `RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows`.",
"@LysandreJik I am trying to fine-tune roberta following in the examples for using run_lm_finetuning.py. The only change I am making is using gradient accumulation as 2 and a gpu batch size of 2 as I was running into cuda memory issues. I am using the raw wiki data from the link provided.\r\n\r\nI did a fresh install and have these on aws:\r\nPython: 3.6.5\r\nTransformers: 2.1.1 (master)\r\nCuda: 10.0\r\nTorch: 1.2.0\r\n1 V100 GPU\r\n\r\nAfter fine-tuning on roberta-large I am getting a perplexity of 2.88 and when I do it on roberta-base I am getting a perplexity of 3.4. \r\n\r\nDo you have any ideas on what I might be doing wrong or my setup or possible solutions?",
"> /pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [386,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n> \r\n> Debugging, I saw it fails to get an embedding that exceeds the max size, but I am not sure in which module to correct. Also, I assume this should have run correctly given that it is the dataset used in the example at https://huggingface.co/transformers/examples.html.\r\n> Any help is greatly appreciated.\r\n> Thanks.\r\n\r\nI encountered essentially the same error when using RoBERTa for SQuAD.\r\n\r\nWhat I found was that the Tokenizer.encode_plus() generates a token_type_ids vector that contains 1s and 0s when two sequences are fed in (question and passage tokens in the case of SQuAD).\r\n\r\nThe RobertaModel tries to look up these indices in RobertaModel.embeddings.token_type_embeddings. However, the size of the token_type_embeddings is [1,768] and so the error that started this issue arises when it tries to look up the index 1.\r\n\r\nI think one solution would be to set token_type_ids to None in the forward method of RobertaModel",
"Also having this issue training RoBERTa on MNLI. Similar to @brandenchan's observations, if I set the `token_type_ids` to all 0, then I don't have a a problem, but if I use `encode_plus` to generate the segment ids, then it triggers that error.\r\n\r\nAdditionally, it seems like `RobertaConfig` sets `type_vocab_size=2`, which seems like it should handle multiple segment ids? But the segment embeddings (currently) only have space for 1.",
"> Maybe this can be caused by `RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows`. I got the same error on cuda. But trying to compute a single iteration on CPU, I get more clear error description: `RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows`.\r\n\r\nThis is pretty weird as I was getting the same error when running the bert_lm_finetuning, I guess it's because the sentence's length is greater than 512, but as in the script TextDataset truncation is done via the parameter block_size, so this isn't supposed to appear... I set block_size<511(510,500...) and the error's gone.",
"From what read in [this thread](https://github.com/huggingface/transformers/issues/1234), it seems the cause for the issue @shreydesai points to is the absence of pre-trained token_type_id beyond a single [1, 768] parameter (explains why passing 0 doesn't trigger index out of range). The thread above offers a hack to get around this (i.e. modifying this parameter ad hoc) if multi-segment inputs are a must (which _is_ the case in my task).\r\n\r\nTo make this more useful, the hack snippet is (credit: [Colanim](https://github.com/Colanim))\r\n```\r\nmodel = RobertaModel.from_pretrained('roberta-base')\r\nmodel.config.type_vocab_size = 2\r\nsingle_emb = model.embeddings.token_type_embeddings\r\nmodel.embeddings.token_type_embeddings = torch.nn.Embedding(2, single_emb.embedding_dim)\r\nmodel.embeddings.token_type_embeddings.weight = torch.nn.Parameter(single_emb.weight.repeat([2, 1]))\r\n```\r\nIf a headed model wrapper is used (e.g. RobertaForSequenceClassification), add .roberta after model to modify the RobertaModel object in the wrapper.\r\nHaving experimented in my classifier, I can contribute one evidence point that it doesn't break anything and works as intended.",
"In my case (using ver 2.3), [this hard coded padding_idx](https://github.com/huggingface/transformers/blob/a436574bfde4f75f518a107f45f987579d813ce5/transformers/modeling_roberta.py#L48) caused the problem.\r\nIf `position_ids=None ^ seq_length = 512`, the max value of position_ids exceeds 511 [here](https://github.com/huggingface/transformers/blob/a436574bfde4f75f518a107f45f987579d813ce5/transformers/modeling_roberta.py#L62-L66), which is the largest index the embedding matrix can use.\r\n\r\nThe code in the latest version is different from the one above, but **setting position_ids manually** fixed the problem for me.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,586 | 1,586 | NONE | null | ## ❓ Questions & Help
I am trying to train Roberta using the run_lm_finetuning.py script and TRAIN_FILE=wiki.train.raw, TEST_FILE=wiki.test.raw, basically, I use the demo data (wikiText-2) as described at https://huggingface.co/transformers/examples.html
CUDA_LAUNCH_BLOCKING=1 python run_lm_finetuning.py \
--output_dir=output \
--model_type=roberta \
--model_name_or_path=roberta-base \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--mlm
/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [386,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Debugging, I saw it fails to get an embedding that exceeds the max size, but I am not sure in which module to correct. Also, I assume this should have run correctly given that it is the dataset used in the example at https://huggingface.co/transformers/examples.html.
Any help is greatly appreciated.
Thanks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1538/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1537 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1537/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1537/comments | https://api.github.com/repos/huggingface/transformers/issues/1537/events | https://github.com/huggingface/transformers/issues/1537 | 508,006,416 | MDU6SXNzdWU1MDgwMDY0MTY= | 1,537 | Behavior of Masked-LM BERT, dependence on masked token | {
"login": "IngoMarquart",
"id": 44617909,
"node_id": "MDQ6VXNlcjQ0NjE3OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/44617909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IngoMarquart",
"html_url": "https://github.com/IngoMarquart",
"followers_url": "https://api.github.com/users/IngoMarquart/followers",
"following_url": "https://api.github.com/users/IngoMarquart/following{/other_user}",
"gists_url": "https://api.github.com/users/IngoMarquart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IngoMarquart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IngoMarquart/subscriptions",
"organizations_url": "https://api.github.com/users/IngoMarquart/orgs",
"repos_url": "https://api.github.com/users/IngoMarquart/repos",
"events_url": "https://api.github.com/users/IngoMarquart/events{/privacy}",
"received_events_url": "https://api.github.com/users/IngoMarquart/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, this is nice illustration of the discrepancy between Bert's training (in which masked tokens are provided) and Bert's testing (in which no masked token is provided).",
"I've noticed that this also frequently occurs when the last token in the sentence is masked. \r\n\r\nFor example,\r\n`['[CLS]', 'donald', 'trump', 'is', 'the', 'president', 'of', 'the', 'united', '[MASK]', '[SEP]']`\r\nis predicted as\r\n`['.', 'donald', 'trump', 'is', 'the', 'president', 'of', 'the', 'united', '.', '.']`\r\n\r\nBut if we mask a middle token, anything besides the last, then it works well:\r\n`['[CLS]', 'donald', 'trump', 'is', 'the', 'president', 'of', 'the', '[MASK]', 'states', '[SEP]']`\r\nis predicted as\r\n`['.', 'donald', 'trump', 'is', 'the', 'president', 'of', 'the', 'united', 'states', '.']`\r\n\r\n@thomwolf Any insight on why this is the case?\r\n",
"We are also interested to work with the prediction of each token (not just the masked ones) and are wondering what is happening there. \r\n\r\nAs you closed this issue, @IngoMarquart , have you found an explanation, or some link, which sheds more light on this?",
"Hey guys, \r\nI would like to investigate how good the top k predictions are. But I don't know how you can generate more than one prediction. \r\n\r\nCan anyone help with this? \r\n"
] | 1,571 | 1,586 | 1,571 | NONE | null | I am experimenting with the masked-LM for BERT to understand how the masking affects predictions of the other tokens.
Of course using no [MASK] is not the intended usage, nor is it to predict each token in the sentence. But my understanding is that the LM head is a separate softmax classifier, taking the final embeddings of BERT for the whole sequence as an input. Therefore the model outputs predictions for all tokens, including the masked token.
I would have expected, that when no token is masked, the prediction should be pretty much perfect. The embedding of any word is then a function of both context and its own position-adjusted initial encoding.
If a token is masked, however, BERT essentially needs to predict it from context and position.
Interestingly, I have run across a sentence
(Donald Trump is the president of the United States), where the first word is not predicted whenever no [MASK] token is set, but is predicted correctly if a later token is masked.
Consider the sequence without any masking
`['[CLS]', 'donald', 'trump', 'is', 'the', 'president', 'of', 'the', 'united', 'states', '[SEP]']`
The output of the masked LM model is
`['.', '.', 'trump', 'is', 'the', 'president', 'of', 'the', 'united', 'states', '.']`
where the first token is missing
If we input the sequence
`['[CLS]', 'donald', 'trump', 'is', 'the', '[MASK]', 'of', 'the', 'united', 'states', '[SEP]']`
Then the output is correctly
`['.', 'donald', 'trump', 'is', 'the', 'president', 'of', 'the', 'united', 'states', '.']`
But this behavior also occurs with masking: further experimentation shows that the position of the [MASK] token determines whether the sentence is correctly predicted. If the [MASK] is early in the sequence (position 2-4 in this case), the first word is mispredicted. If it is later, after position 4, then the first token is predicted correctly.
There are other sentences, where this "problem" does not occur (for example if the sentence starts with "the").
I am trying to understand this behavior. Why does BERT fail prediction of the first non-masked token in some cases, in particular, when no token is masked and the model should have "full information"?
Am I misunderstanding the model or the implementation?
Attached is a small example based on the github readme that replicates this behavior
[lm_test.py.txt](https://github.com/huggingface/transformers/files/3735740/lm_test.py.txt)
Edit: In case you are wondering why the heck I would want to do this. I am working with a model that uses (part of the) logits from the LM head repeatedly for different positions. The corpus is fixed. So the correct way would be to run the LM each time, but if I could run BERT instead once for every sequence in the corpus and save the relevant predictions, it would save a lot of time.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1537/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1537/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1536/comments | https://api.github.com/repos/huggingface/transformers/issues/1536/events | https://github.com/huggingface/transformers/issues/1536 | 507,824,313 | MDU6SXNzdWU1MDc4MjQzMTM= | 1,536 | Penalize high confident false negative classifications? | {
"login": "callenilsson",
"id": 16915094,
"node_id": "MDQ6VXNlcjE2OTE1MDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/16915094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/callenilsson",
"html_url": "https://github.com/callenilsson",
"followers_url": "https://api.github.com/users/callenilsson/followers",
"following_url": "https://api.github.com/users/callenilsson/following{/other_user}",
"gists_url": "https://api.github.com/users/callenilsson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/callenilsson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/callenilsson/subscriptions",
"organizations_url": "https://api.github.com/users/callenilsson/orgs",
"repos_url": "https://api.github.com/users/callenilsson/repos",
"events_url": "https://api.github.com/users/callenilsson/events{/privacy}",
"received_events_url": "https://api.github.com/users/callenilsson/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The softmax function specifically uses exponentiation to exacerbate the differences in scores (to get the soft 'max'). You can normalize scores by other means than a softmax.\r\n\r\nRelated to your title: using a log loss will penalize wrong predictions with high confidence more (e.g. BCE).",
"Related read is the section entitled \"Don’t Mistake Class Probabilities for Confidence\" here: https://www.inovex.de/blog/uncertainty-quantification-deep-learning/",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,577 | 1,577 | NONE | null | ## ❓ Questions & Help
I added the line `logits = torch.nn.functional.softmax(logits)` to convert binary classifications to a confidence score between 0.0 - 1.0. However, the predictions are very harsh being really close to either 0.0 or 1.0 and not somewhere in between. Is there a way to penalize the model from being so categorical? I especially want to minimize high confidence scores on false negatives. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1536/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1535/comments | https://api.github.com/repos/huggingface/transformers/issues/1535/events | https://github.com/huggingface/transformers/issues/1535 | 507,820,870 | MDU6SXNzdWU1MDc4MjA4NzA= | 1,535 | Why the output of DistilBertModel is inconsistent with BertModel?! | {
"login": "amirj",
"id": 1645137,
"node_id": "MDQ6VXNlcjE2NDUxMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1645137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amirj",
"html_url": "https://github.com/amirj",
"followers_url": "https://api.github.com/users/amirj/followers",
"following_url": "https://api.github.com/users/amirj/following{/other_user}",
"gists_url": "https://api.github.com/users/amirj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amirj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amirj/subscriptions",
"organizations_url": "https://api.github.com/users/amirj/orgs",
"repos_url": "https://api.github.com/users/amirj/repos",
"events_url": "https://api.github.com/users/amirj/events{/privacy}",
"received_events_url": "https://api.github.com/users/amirj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello @amirj,\r\n\r\nThe \"pooled_output\" is the hidden state of the `[CLS]`. It is this hidden state that is used for classification tasks for instance (see DistilBertForSequenceClassification). So you could retrieve it by filtering out `hidden_states `.\r\n\r\nThe reason why there is no linear transformation in the pooler of DistilBERT is because I'm removing the next sentence prediction (see RoBERTa which also do the same). However, this linear transformation is still in `DistilBertForSequenceClassification` so that the classification heads for DistilBERT and BERT have the same number of parameters.\r\n\r\nI hope it answers your question.\r\nVictor",
"Hello @VictorSanh,\r\nDistilBertForSequenceClassification is the rescue.\r\nThanks.\r\nAmir",
"def forward(self, input_ids, attention_mask, labels=None):\r\n output = self.bert(input_ids, attention_mask = attention_mask)\r\n output = self.classifier(output.hidden_states) \r\n output = torch.sigmoid(output) \r\n loss = 0\r\n if labels is not None:\r\n loss = self.criterion(output, labels)\r\n return loss, output\r\n\r\nfor BERT model self.classifier will take output.pooler_output as its input\r\nBut for DistilBERT this doesnt happen.\r\nI used \"DistilBertForSequenceClassification \" and replaced pooler_output with hidden_states, still it doesnt work\r\n\r\nError I get is: linear(): argument 'input' (position 1) must be Tensor, not NoneType"
] | 1,571 | 1,644 | 1,571 | NONE | null | [The output of DistilBertModel](https://github.com/huggingface/transformers/blob/be916cb3fb4579e278ceeaec11a6524662797d7f/transformers/modeling_distilbert.py#L468) does not contain [pooled_output as available in BERT model](https://github.com/huggingface/transformers/blob/be916cb3fb4579e278ceeaec11a6524662797d7f/transformers/modeling_bert.py#L632).
I'm going to replace BERT with DistilBert in a classification task, if pooled_output is not a proper way in DistillBertModel? -Currently, I'm using pooled_output of BERT in my experiments. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1535/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1534/comments | https://api.github.com/repos/huggingface/transformers/issues/1534/events | https://github.com/huggingface/transformers/issues/1534 | 507,798,363 | MDU6SXNzdWU1MDc3OTgzNjM= | 1,534 | run_ner.py file with Distill Bert | {
"login": "amankedia",
"id": 8494998,
"node_id": "MDQ6VXNlcjg0OTQ5OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8494998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amankedia",
"html_url": "https://github.com/amankedia",
"followers_url": "https://api.github.com/users/amankedia/followers",
"following_url": "https://api.github.com/users/amankedia/following{/other_user}",
"gists_url": "https://api.github.com/users/amankedia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amankedia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amankedia/subscriptions",
"organizations_url": "https://api.github.com/users/amankedia/orgs",
"repos_url": "https://api.github.com/users/amankedia/repos",
"events_url": "https://api.github.com/users/amankedia/events{/privacy}",
"received_events_url": "https://api.github.com/users/amankedia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have the same issue. Did you end up using Distilbert?",
"Not sure how well it will perform. Casing is an important feature used in many NER tasks. So I would say it _could_ work, but ymmv. For reference: https://stackoverflow.com/questions/56384231/case-sensitive-entity-recognition",
"RoBERTa is cased so you guys can try using DistilRoBERTa, released today by @VictorSanh:\r\n\r\n```\r\n--model_name_or_path distilroberta-base\r\n```\r\n\r\nYou'll probably need to adapt run_ner.py (PR welcome)",
"Actually working on that today so I'll let you know how it goes. ",
"This is actually really cool. I was looking today at the models and had no idea DistilRoBERTa was released today. Awesome @VictorSanh!",
"@amankedia I think this issue is resolved? If so we can resolve it :)",
"Solved indeed. Thanks everyone for contributing!"
] | 1,571 | 1,572 | 1,572 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I wish to use the Distill Bert model for NER. I am not sure if it will work with it directly. Any suggestions on that end would be great.
Also, what values should the parameters **--model_type** and **--model_name_or_path** take for Distill Bert?
Other parameters per my understanding would be the same. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1534/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1533/comments | https://api.github.com/repos/huggingface/transformers/issues/1533/events | https://github.com/huggingface/transformers/issues/1533 | 507,793,041 | MDU6SXNzdWU1MDc3OTMwNDE= | 1,533 | Add vocabulary gives sequence length warning | {
"login": "callenilsson",
"id": 16915094,
"node_id": "MDQ6VXNlcjE2OTE1MDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/16915094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/callenilsson",
"html_url": "https://github.com/callenilsson",
"followers_url": "https://api.github.com/users/callenilsson/followers",
"following_url": "https://api.github.com/users/callenilsson/following{/other_user}",
"gists_url": "https://api.github.com/users/callenilsson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/callenilsson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/callenilsson/subscriptions",
"organizations_url": "https://api.github.com/users/callenilsson/orgs",
"repos_url": "https://api.github.com/users/callenilsson/repos",
"events_url": "https://api.github.com/users/callenilsson/events{/privacy}",
"received_events_url": "https://api.github.com/users/callenilsson/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, this warning means that the sequence you have encoded is longer than the maximum sequence length the model can handle. It isn't related to the tokens you have added.\r\n\r\nRoBERTa can only handle sequences of a maximum of 512 tokens, so you should make sure you only pass sequences of a max length of 512 or else it will crash. You can truncate your sequence so that it fits, or you can use another model that can accept longer sequences.",
"> Hi, this warning means that the sequence you have encoded is longer than the maximum sequence length the model can handle. It isn't related to the tokens you have added.\r\n> \r\n> RoBERTa can only handle sequences of a maximum of 512 tokens, so you should make sure you only pass sequences of a max length of 512 or else it will crash. You can truncate your sequence so that it fits, or you can use another model that can accept longer sequences.\r\n\r\nThanks! My bad, completely misunderstood the warning, all fixed. However, my problem now seems to be that the function `tokenizer.encode_plus()` used in `glue_convert_examples_to_features()` gets exponentially slower the more words I add to the tokenizer's vocabulary.\r\n\r\nFor example, starting with a tokenizer vocabulary size of 50265, the `tokenizer.encode_plus()` takes ~0.00048 sec per call. If I add 1200 more words, giving me a tokenizer vocabulary size of 51465, the `tokenizer.encode_plus()` now takes ~0.05729 sec per call, which is ~120x slower. It gets even worse the more words I add, causing me waiting times up to 1h just to pre-process the dataset. What causes this exponential (or extreme linear) growth to happen? Is it possible to optimize it?",
"> > Hi, this warning means that the sequence you have encoded is longer than the maximum sequence length the model can handle. It isn't related to the tokens you have added.\r\n> > RoBERTa can only handle sequences of a maximum of 512 tokens, so you should make sure you only pass sequences of a max length of 512 or else it will crash. You can truncate your sequence so that it fits, or you can use another model that can accept longer sequences.\r\n> \r\n> Thanks! My bad, completely misunderstood the warning, all fixed. However, my problem now seems to be that the function `tokenizer.encode_plus()` used in `glue_convert_examples_to_features()` gets exponentially slower the more words I add to the tokenizer's vocabulary.\r\n> \r\n> For example, starting with a tokenizer vocabulary size of 50265, the `tokenizer.encode_plus()` takes ~0.00048 sec per call. If I add 1200 more words, giving me a tokenizer vocabulary size of 51465, the `tokenizer.encode_plus()` now takes ~0.05729 sec per call, which is ~120x slower. It gets even worse the more words I add, causing me waiting times up to 1h just to pre-process the dataset. What causes this exponential (or extreme linear) growth to happen? Is it possible to optimize it?\r\n\r\nPlease see https://github.com/huggingface/transformers/issues/1830 , https://github.com/huggingface/transformers/issues/1621 and https://github.com/huggingface/transformers/pull/1881",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,581 | 1,581 | NONE | null | ## ❓ Questions & Help
I'm trying to add extra vocabulary to RoBERTa using the `tokenizer.add_tokens()` function. However, when training I get the following warning message:
`WARNING - transformers.tokenization_utils - Token indices sequence length is longer than the specified maximum sequence length for this model (751 > 512). Running this sequence through the model will result in indexing errors`
What's going on here? Should I be concerned about this or should I ignore it? The function that calls this error is `tokenizer.convert_tokens_to_ids()`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1533/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1532/comments | https://api.github.com/repos/huggingface/transformers/issues/1532/events | https://github.com/huggingface/transformers/issues/1532 | 507,709,752 | MDU6SXNzdWU1MDc3MDk3NTI= | 1,532 | 'BertForSequenceClassification' is not defined 'DUMMY_INPUTS' is not defined | {
"login": "roccqqck",
"id": 34628766,
"node_id": "MDQ6VXNlcjM0NjI4NzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/34628766?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roccqqck",
"html_url": "https://github.com/roccqqck",
"followers_url": "https://api.github.com/users/roccqqck/followers",
"following_url": "https://api.github.com/users/roccqqck/following{/other_user}",
"gists_url": "https://api.github.com/users/roccqqck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roccqqck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roccqqck/subscriptions",
"organizations_url": "https://api.github.com/users/roccqqck/orgs",
"repos_url": "https://api.github.com/users/roccqqck/repos",
"events_url": "https://api.github.com/users/roccqqck/events{/privacy}",
"received_events_url": "https://api.github.com/users/roccqqck/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"next time i \r\n```\r\nimport torch\r\n```\r\nit showed\r\n```\r\n---------------------------------------------------------------------------\r\nNameError Traceback (most recent call last)\r\n<ipython-input-14-71a5c3f94250> in <module>\r\n----> 1 pytorch_model = BertForSequenceClassification.from_pretrained('model', from_tf=True)\r\n\r\n~/miniconda3/envs/tfenv2/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 357 try:\r\n 358 from transformers import load_tf2_checkpoint_in_pytorch_model\r\n--> 359 model = load_tf2_checkpoint_in_pytorch_model(model, resolved_archive_file, allow_missing_keys=True)\r\n 360 except ImportError as e:\r\n 361 logger.error(\"Loading a TensorFlow model in PyTorch, requires both PyTorch and TensorFlow to be installed. Please see \"\r\n\r\n~/miniconda3/envs/tfenv2/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py in load_tf2_checkpoint_in_pytorch_model(pt_model, tf_checkpoint_path, tf_inputs, allow_missing_keys)\r\n 199 \r\n 200 if tf_inputs is None:\r\n--> 201 tf_inputs = tf.constant(DUMMY_INPUTS)\r\n 202 \r\n 203 if tf_inputs is not None:\r\n\r\nNameError: name 'DUMMY_INPUTS' is not defined\r\n```",
"That's right, you can't import the PyTorch models if you don't have PyTorch installed in your environment. The `DUMMY_INPUTS` is a bug that was fixed with #1509. Could you please install it from source and let me know if you still have the error?",
"> Indeed, you can't import the PyTorch models if you don't have PyTorch installed in your environment. The `DUMMY_INPUTS` indeed is a bug that was fixed with #1509. Could you please install it from source and let me know if you still have the error?\r\n\r\ndoes```pip install https://github.com/huggingface/transformers``` mean install from source?",
"I believe the correct way would be to specify it is a git url: \r\n\r\n`pip install git+https://github.com/huggingface/transformers.git`",
"> I believe the correct way would be to specify it is a git url:\r\n> \r\n> `pip install git+https://github.com/huggingface/transformers.git`\r\n\r\nit worked, but another issues showed up\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-16-5f3cd63765b9> in <module>\r\n 6 inputs_2 = tokenizer.encode_plus(sentence_0, sentence_2, add_special_tokens=True, return_tensors='pt')\r\n 7 \r\n----> 8 pred_1 = pytorch_model(**inputs_1)[0].argmax().item()\r\n 9 pred_2 = pytorch_model(**inputs_2)[0].argmax().item()\r\n 10 print(\"sentence_1 is\", \"a paraphrase\" if pred_1 else \"not a paraphrase\", \"of sentence_0\")\r\n\r\n~/miniconda3/envs/tfenv2/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 539 result = self._slow_forward(*input, **kwargs)\r\n 540 else:\r\n--> 541 result = self.forward(*input, **kwargs)\r\n 542 for hook in self._forward_hooks.values():\r\n 543 hook_result = hook(self, input, result)\r\n\r\nTypeError: forward() got an unexpected keyword argument 'special_tokens_mask'\r\n```",
"Indeed, this is a bug, it seems the readme is not up-to-date since we added the `special_tokens_mask` in 2.1. Thank you for reporting it!\r\n\r\nIf you add the two lines mentioned below, it should work:\r\n\r\n```py\r\ninputs_1 = tokenizer.encode_plus(sentence_0, sentence_1, add_special_tokens=True, return_tensors='pt')\r\ninputs_2 = tokenizer.encode_plus(sentence_0, sentence_2, add_special_tokens=True, return_tensors='pt')\r\n\r\ndel inputs_1[\"special_tokens_mask\"] # <---- add this\r\ndel inputs_2[\"special_tokens_mask\"] # <---- add this\r\n\r\npred_1 = pytorch_model(**inputs_1)[0].argmax().item()\r\npred_2 = pytorch_model(**inputs_2)[0].argmax().item()\r\nprint(\"sentence_1 is\", \"a paraphrase\" if pred_1 else \"not a paraphrase\", \"of sentence_0\")\r\nprint(\"sentence_2 is\", \"a paraphrase\" if pred_2 else \"not a paraphrase\", \"of sentence_0\")\r\n```",
"> Indeed, this is a bug, it seems the readme is not up-to-date since we added the `special_tokens_mask` in 2.1. Thank you for reporting it!\r\n> \r\n> If you add the two lines mentioned below, it should work:\r\n> \r\n> ```python\r\n> inputs_1 = tokenizer.encode_plus(sentence_0, sentence_1, add_special_tokens=True, return_tensors='pt')\r\n> inputs_2 = tokenizer.encode_plus(sentence_0, sentence_2, add_special_tokens=True, return_tensors='pt')\r\n> \r\n> del inputs_1[\"special_tokens_mask\"] # <---- add this\r\n> del inputs_2[\"special_tokens_mask\"] # <---- add this\r\n> \r\n> pred_1 = pytorch_model(**inputs_1)[0].argmax().item()\r\n> pred_2 = pytorch_model(**inputs_2)[0].argmax().item()\r\n> print(\"sentence_1 is\", \"a paraphrase\" if pred_1 else \"not a paraphrase\", \"of sentence_0\")\r\n> print(\"sentence_2 is\", \"a paraphrase\" if pred_2 else \"not a paraphrase\", \"of sentence_0\")\r\n> ```\r\n\r\nthanks it worked!!",
"We updated the README accordingly, feel free to open other issues if you encounter other bugs.",
"@LysandreJik \r\n\r\ni got a similar error when i run `run_tf_glue.py`.\r\n\r\n```\r\n...\r\nDataset glue downloaded and prepared to /root/tensorflow_datasets/glue/mrpc/0.0.2. Subsequent calls will reuse this data.\r\nINFO:absl:Constructing tf.data.Dataset for split None, from /root/tensorflow_datasets/glue/mrpc/0.0.2\r\nTrain for 114 steps, validate for 6 steps\r\nEpoch 1/2\r\n114/114 [==============================] - 69s 601ms/step - loss: 0.5447 - accuracy: 0.7314 - val_loss: 0.4515 - val_accuracy: 0.7943\r\nEpoch 2/2\r\n114/114 [==============================] - 35s 306ms/step - loss: 0.2919 - accuracy: 0.8872 - val_loss: 0.4064 - val_accuracy: 0.8542\r\nTraceback (most recent call last):\r\n File \"run_tf_glue.py\", line 51, in <module>\r\n pytorch_model = BertForSequenceClassification.from_pretrained('./save/', from_tf=True)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py\", line 359, in from_pretrained\r\n model = load_tf2_checkpoint_in_pytorch_model(model, resolved_archive_file, allow_missing_keys=True)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_pytorch_utils.py\", line 201, in load_tf2_checkpoint_in_pytorch_model\r\n tf_inputs = tf.constant(DUMMY_INPUTS)\r\nNameError: name 'DUMMY_INPUTS' is not defined\r\n```\r\n\r\ntf version : 2.0 ( via pip )\r\ntorch version : 1.2.0 (via pip )",
"Have you tried installing from source, as was mentioned in the comment before yourts? `pip install git+https://github.com/huggingface/transformers.git`",
"@LysandreJik \r\n\r\ni got another error ;;\r\n```\r\n$ pip3 install git+https://github.com/huggingface/transformers.git --upgrade\r\n$ python run_tf_glue.py\r\n...\r\nTrain for 114 steps, validate for 6 steps\r\nEpoch 1/2\r\n114/114 [==============================] - 60s 525ms/step - loss: 0.5817 - accuracy: 0.6911 - val_loss: 0.3961 - val_accuracy: 0.8229\r\nEpoch 2/2\r\n114/114 [==============================] - 34s 300ms/step - loss: 0.3505 - accuracy: 0.8460 - val_loss: 0.3403 - val_accuracy: 0.8516\r\nTraceback (most recent call last):\r\n File \"run_tf_glue.py\", line 60, in <module>\r\n pred_1 = pytorch_model(**inputs_1)[0].argmax().item()\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'special_tokens_mask'\r\n```",
"try importing \r\n\r\nfrom transformers.modeling_tf_bert import TFBertForSequenceClassification\r\n\r\n",
"> try importing\r\n> \r\n> from transformers.modeling_tf_bert import TFBertForSequenceClassification\r\n\r\nIt worked! thanks.",
"Has transformers.modeling_tf_bert been changed? I tried it and got:\r\n\r\n> ModuleNotFoundError: No module named 'transformers.modeling_tf_bert'\r\n\r\neven though I've successfully imported transformers\r\n\r\nwhat is the proper call to import BertForSequenceClassification?",
"To import `BertForSequenceClassification` (you need to have PyTorch installed), \r\n\r\n```py\r\nfrom transformers import BertForSequenceClassification\r\n```\r\n\r\nTo import `TFBertForSequenceClassification` (you need to have TensorFlow installed):\r\n\r\n```py\r\nfrom transformers import TFBertForSequenceClassification\r\n```"
] | 1,571 | 1,614 | 1,571 | NONE | null | transformers-2.1.1
https://github.com/huggingface/transformers#quick-tour-tf-20-training-and-pytorch-interoperability
i just copied and paste and run the code.
it showed
```
NameError: name 'BertForSequenceClassification' is not defined
```
i can't even
```
from transformers import BertForSequenceClassification
```
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-27-7a027f32a339> in <module>
----> 1 from transformers import BertForSequenceClassification
ImportError: cannot import name 'BertForSequenceClassification' from 'transformers'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1532/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1531/comments | https://api.github.com/repos/huggingface/transformers/issues/1531/events | https://github.com/huggingface/transformers/issues/1531 | 507,635,191 | MDU6SXNzdWU1MDc2MzUxOTE= | 1,531 | why xlnet requires a long prompt for short inputs while Bert does not ? | {
"login": "muiPomeranian",
"id": 29085131,
"node_id": "MDQ6VXNlcjI5MDg1MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/29085131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muiPomeranian",
"html_url": "https://github.com/muiPomeranian",
"followers_url": "https://api.github.com/users/muiPomeranian/followers",
"following_url": "https://api.github.com/users/muiPomeranian/following{/other_user}",
"gists_url": "https://api.github.com/users/muiPomeranian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muiPomeranian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muiPomeranian/subscriptions",
"organizations_url": "https://api.github.com/users/muiPomeranian/orgs",
"repos_url": "https://api.github.com/users/muiPomeranian/repos",
"events_url": "https://api.github.com/users/muiPomeranian/events{/privacy}",
"received_events_url": "https://api.github.com/users/muiPomeranian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,576 | 1,576 | NONE | null | hey guys,
Q1)
can someone give some more insight what @thomwolf explaining about?
'''
#846
The main reason you get bad performance is that XLNet is not good on short inputs (comes from the way it is pretrained, always having a long memory and only guessing a few words in the sequence).
The run_generation example here will show you how to get better performances by adding a random text as initiator.
Aman Rusia also wrote a blog post about that here. We are using his solution in the run_generation example.
'''
I can't understand the difference the way both Bert and XLnetLM works for LMhead task.
Aren't both model having disadvantages if they have short sentence?
It seems he said XLnet has huge disadvantage on short input sentence
while Bert does not(or has less disadvantage). Any detail explanation could be useful !
Q2)
Also, I can't get the point of adding extra padding or adding random padding things to improve XLnetLMHead model. Any snippet or explanation could be appreciated too...(saw the link but could not fully understood). I experimented by just adding extra strings of line:'I believe my sister is because she is a blonde ' + ' ' and it gives much better result than not having at the end....
Q3)
#846 (comment)
Lastly, why do we have better result when we don't use perm_mask ? above link response shows that
not having perm_mask option does give at least better result...But isn't perm_mask supposed to help to get better prediction and what author of paper used for SOTA ?
isn't perm_mask allow model to not seeing the next tokens in the given input while can see the previous tokens? According to the paper and the original code, I could see that if permute order is 3->4->1->2, mask=1,3, then model cannot see masked<1> when it tried to predict masked<3> but the reverse is possible.
Many thanks in advance ! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1531/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1531/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1530 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1530/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1530/comments | https://api.github.com/repos/huggingface/transformers/issues/1530/events | https://github.com/huggingface/transformers/issues/1530 | 507,582,508 | MDU6SXNzdWU1MDc1ODI1MDg= | 1,530 | Plan to support UniLM ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is on our mid-term roadmap.\r\nWe have a project adding Seq2seq models and UniLM will be part of this project."
] | 1,571 | 1,571 | 1,571 | CONTRIBUTOR | null | # 🌟New model addition
## Model description
**UniLM** : Pre-trained transformer for sequence to sequence generation.
Paper : https://arxiv.org/pdf/1905.03197.pdf
## Open Source status
* [x] the model implementation is available: **[official Pytorch](https://github.com/microsoft/unilm)**
* [x] the model weights are available: For now only english : [UniLMv1-large-cased](https://github.com/microsoft/unilm#pre-trained-models)
## Additional context
The official implementation is based on a modified version of this repository (version 0.4.0).
That would be nice to have a unified API :)
*Note : They didn't release the code for pretraining yet.* | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1530/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1530/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1529 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1529/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1529/comments | https://api.github.com/repos/huggingface/transformers/issues/1529/events | https://github.com/huggingface/transformers/issues/1529 | 507,567,684 | MDU6SXNzdWU1MDc1Njc2ODQ= | 1,529 | Hight CPU and low GPU on XLNet | {
"login": "laiviet",
"id": 29224591,
"node_id": "MDQ6VXNlcjI5MjI0NTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/29224591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laiviet",
"html_url": "https://github.com/laiviet",
"followers_url": "https://api.github.com/users/laiviet/followers",
"following_url": "https://api.github.com/users/laiviet/following{/other_user}",
"gists_url": "https://api.github.com/users/laiviet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laiviet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laiviet/subscriptions",
"organizations_url": "https://api.github.com/users/laiviet/orgs",
"repos_url": "https://api.github.com/users/laiviet/repos",
"events_url": "https://api.github.com/users/laiviet/events{/privacy}",
"received_events_url": "https://api.github.com/users/laiviet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I also meet the above problem with XLNet. The GPU usage is very low and unstable, but the CPU usage is very high. The running speed is very low.\r\n\r\nAre there any ops running on CPU rather than GPU in your XLNet implementation? How to improve the GPU usage and speed up the running speed ? Thanks!\r\n\r\nEnvironment:\r\nPytorch: 1.1.0\r\nGPU: V100, 16G\r\npytorch_transformers: 1.2.0\r\nOS: centos 7.6\r\nPython: 3.6",
"Does it occur while training or predicting? Are you sure your gpu is available (to PyTorch or TensorFlow)? What do logs say? ",
"It is likely that this is caused by the tokenisation rather than the model training. Tokenisation typically happens on the CPU, the token_ids are then transferred to the GPU and from then on out the training happens on the GPU. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"The same problem. How to solve it?"
] | 1,571 | 1,592 | 1,585 | NONE | null | ## 🐛 Bug
I am running Bert, GPT, GPT2, XLNET. I got very high CPU usage (e.g. 16 cores) with XLNet while the others (Bert, GPT, GPT2) dont.
For BERT, GPT, GPT2: CPU 1 cores, 100%GPU
For XLNet: CPU 16 cores, 50 to 60% GPU
Is there any hidden implementation which requires CPU?
<!-- Important information -->
Model I am using (XLNet....): XLNET
Language I am using the model on: English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Ubuntu 18.04
* Python version: 3.6
* PyTorch version: 1.0
* PyTorch Transformers version (or branch): pytorch_transformers 1.2.0
* Using GPU ? RTX 2080 TI
* Distributed of parallel setup ? No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1529/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1529/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1528 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1528/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1528/comments | https://api.github.com/repos/huggingface/transformers/issues/1528/events | https://github.com/huggingface/transformers/issues/1528 | 507,503,848 | MDU6SXNzdWU1MDc1MDM4NDg= | 1,528 | Question about hidden states in GPT2 | {
"login": "weiguowilliam",
"id": 31396452,
"node_id": "MDQ6VXNlcjMxMzk2NDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/31396452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weiguowilliam",
"html_url": "https://github.com/weiguowilliam",
"followers_url": "https://api.github.com/users/weiguowilliam/followers",
"following_url": "https://api.github.com/users/weiguowilliam/following{/other_user}",
"gists_url": "https://api.github.com/users/weiguowilliam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weiguowilliam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weiguowilliam/subscriptions",
"organizations_url": "https://api.github.com/users/weiguowilliam/orgs",
"repos_url": "https://api.github.com/users/weiguowilliam/repos",
"events_url": "https://api.github.com/users/weiguowilliam/events{/privacy}",
"received_events_url": "https://api.github.com/users/weiguowilliam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! The vector of the `hidden_states` is indeed of shape `(13, seq_len, 768)`. The first value (`hidden_states[0]`), of shape `(seq_len, 768)` corresponds to the sum of the word + positional embeddings. The subsequent values are added every time the model goes through an attention layer.\r\n\r\nWithout taking into account the dropout, you would therefore have:\r\n\r\n```\r\nhidden_states[0] | 0 -> word_embeddings(inputs) + positional_embeds(outputs)\r\nhidden_states[1] | 1 -> first_attention_layer(0)\r\nhidden_states[2] | 2 -> second_attention_layer(1)\r\n...\r\n```\r\n\r\nIf by top layer you mean first attention layer of the model, then it would be `hidden_states[1]`. If by top you mean last, it would be `hidden_states[12]`, which would be the same as `outputs[0] `.\r\n\r\nThe size of those is of `(13, seq_len, 768)` and not `(13, 1, 768)` because the model computes every token and not only the last token.\r\n",
"> The size of those is of `(13, seq_len, 768)` and not `(13, 1, 768)` because the model computes every token and not only the last token.\r\n\r\nHi! Thank you for your reply. I wonder if the states for the previous token will be used for calculating the attention when predicting the later token? Is that the reason that you store the states for the previous tokens?\r\n",
"The models keep the key-value pairs so that they're not recomputed on the next model pass. These are stored in the `past`, and can reduce the amount of computing for each following model pass if you pass them to the next forward pass (like we do in run_generation).\r\n\r\nThe hidden states won't be used for this though, but you can use them to extract intermediate features from the transformer.",
"> The models keep the key-value pairs so that they're not recomputed on the next model pass. These are stored in the `past`, and can reduce the amount of computing for each following model pass if you pass them to the next forward pass (like we do in run_generation).\r\n> \r\n> The hidden states won't be used for this though, but you can use them to extract intermediate features from the transformer.\r\n\r\nHi! Thank you for your reply. That really helps.\r\n\r\nSo now I want to make sure that in the code block in question:\r\nSince hidden_states[12] is for the top layer, then I extract hidden_states[12][0][5], whose size is 768. Is it the vector for prediction based on the word \"cute\" (and all previous 5 words)?\r\n\r\n",
"Yes, you're right. You could also retrieve this vector by using a `GPT2Model` instead of a `GPT2LMHeadModel`, which is the base transformer: \r\n\r\n```py\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel, GPT2Model\r\nimport torch\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\n\r\nlm_model = GPT2LMHeadModel.from_pretrained(\"gpt2\", output_hidden_states=True)\r\nlm_model.eval()\r\n\r\nmodel = GPT2Model.from_pretrained('gpt2')\r\nmodel.eval()\r\n\r\ninput_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\")).unsqueeze(0) # Batch size 1\r\n\r\noutputs = model(input_ids)\r\nlm_outputs = lm_model(input_ids, labels=input_ids)\r\n\r\ntransformer_output = outputs[0]\r\ntransformer_hidden_states = lm_outputs[3]\r\n\r\nprint(transformer_hidden_states[12][:, -1, :] - transformer_output[:, -1, :])\r\n```\r\n\r\nThis should output a tensor of 0s as the two tensors are equal.",
"@LysandreJik Thank you so much for your help! That works."
] | 1,571 | 1,572 | 1,572 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
>tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
>model = GPT2LMHeadModel.from_pretrained('gpt2',output_hidden_states=True)
>model.eval()
>input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
>outputs = model(input_ids, labels=input_ids)
>hidden_states = outputs[3]
Here the shape of hidden_states is (13,6,768). I have 2 questions.
1. Which one is the vector for the top layer, hidden_states[0] or hidden_states[12]?
2. Suppose hidden_states[12] is for the top layer, then I extract hidden_states[12][0][0], whose size is 768. Is it the vector for prediction based on the word "hello"? But since I know the next word is ",", why do I need hidden_states[12][0][0]? In my opinion, the shape of hidden_states should be (13,1,768), which is only used for predicting next word after "cute". I'm quite confused of the "6" here.
Please help me with the questions. Thank you in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1528/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1527 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1527/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1527/comments | https://api.github.com/repos/huggingface/transformers/issues/1527/events | https://github.com/huggingface/transformers/issues/1527 | 507,407,932 | MDU6SXNzdWU1MDc0MDc5MzI= | 1,527 | Training GPT or GPT-2 from scratch | {
"login": "LeonCrashCode",
"id": 5652525,
"node_id": "MDQ6VXNlcjU2NTI1MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5652525?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeonCrashCode",
"html_url": "https://github.com/LeonCrashCode",
"followers_url": "https://api.github.com/users/LeonCrashCode/followers",
"following_url": "https://api.github.com/users/LeonCrashCode/following{/other_user}",
"gists_url": "https://api.github.com/users/LeonCrashCode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeonCrashCode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeonCrashCode/subscriptions",
"organizations_url": "https://api.github.com/users/LeonCrashCode/orgs",
"repos_url": "https://api.github.com/users/LeonCrashCode/repos",
"events_url": "https://api.github.com/users/LeonCrashCode/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeonCrashCode/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I think, just create an instance of the model (without loading from pretrained one), switch it to train mode and run. That's all.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am currently trying to implement this as well. Once I figured it out i'll let you know !"
] | 1,571 | 1,585 | 1,581 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I am trying to retrain GPT or GPT-2 from scratch, is there any implementation for this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1527/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1527/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1526 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1526/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1526/comments | https://api.github.com/repos/huggingface/transformers/issues/1526/events | https://github.com/huggingface/transformers/issues/1526 | 507,341,324 | MDU6SXNzdWU1MDczNDEzMjQ= | 1,526 | Alignment of tokens - 'extract_features_aligned_to_words' from fairseq roberta? | {
"login": "sidsvash26",
"id": 10222970,
"node_id": "MDQ6VXNlcjEwMjIyOTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/10222970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sidsvash26",
"html_url": "https://github.com/sidsvash26",
"followers_url": "https://api.github.com/users/sidsvash26/followers",
"following_url": "https://api.github.com/users/sidsvash26/following{/other_user}",
"gists_url": "https://api.github.com/users/sidsvash26/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sidsvash26/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sidsvash26/subscriptions",
"organizations_url": "https://api.github.com/users/sidsvash26/orgs",
"repos_url": "https://api.github.com/users/sidsvash26/repos",
"events_url": "https://api.github.com/users/sidsvash26/events{/privacy}",
"received_events_url": "https://api.github.com/users/sidsvash26/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"It seems this is still unsupported by Huggingface?"
] | 1,571 | 1,605 | 1,576 | NONE | null | ## ❓ Questions & Help
I'm using RoBERTa pretrained model to get embeddings for a dataset. But I want to get the embeddings as per the tokenization which is already present in my dataset. So basically I would want to average the embeddings of a token's BPE if that token in my dataset is getting split into different BPEs. fairseq roberta has a method for this as follows:
```
import torch
from fairseq.models.roberta import alignment_utils
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
roberta.eval()
example_string_tokens = ['Dr', 'Greenwalt', 'fixed', 'my', 'neck', 'from', 'a', 'snowboard', 'injury', 'and', 'was', 'way', 'more', 'effective', 'that', 'a', 'regular', 'doctor', '.']
doc = roberta.extract_features_aligned_to_words(" ".join(example_string_tokens))
for tok in doc:
print('{:10}{} (...)'.format(str(tok), tok.vector[:5]))
```
The output is:
```
<s> tensor([-0.0656, 0.0189, -0.0003, -0.0907, 0.0550], grad_fn=<SliceBackward>) (...)
Dr tensor([ 0.2180, -0.0530, -0.3689, -0.0619, -0.6243], grad_fn=<SliceBackward>) (...)
Greenwalt tensor([ 0.3744, 0.0741, -0.7149, 0.0654, -0.1234], grad_fn=<SliceBackward>) (...)
fixed tensor([ 0.2132, 0.0841, -0.2535, -0.1404, -0.0060], grad_fn=<SliceBackward>) (...)
my tensor([ 0.1313, -0.0466, -0.1373, 0.1730, 0.1771], grad_fn=<SliceBackward>) (...)
neck tensor([ 0.0674, -0.3413, -0.0192, 0.0290, -0.3497], grad_fn=<SliceBackward>) (...)
from tensor([-0.0301, -0.3562, -0.3798, 0.0687, 0.0290], grad_fn=<SliceBackward>) (...)
a tensor([-0.2496, -0.1036, 0.0270, -0.0819, -0.2146], grad_fn=<SliceBackward>) (...)
snowboard tensor([ 0.4018, 0.1432, -0.0499, 0.2095, -0.0520], grad_fn=<SliceBackward>) (...)
injury tensor([ 0.0010, -0.6273, -0.0312, -0.1957, -0.4832], grad_fn=<SliceBackward>) (...)
and tensor([ 0.0747, -0.3335, -0.0593, -0.3805, 0.0930], grad_fn=<SliceBackward>) (...)
was tensor([ 0.1501, -0.1334, -0.4789, -0.1974, -0.3096], grad_fn=<SliceBackward>) (...)
way tensor([-0.2803, 0.3204, -0.1663, -0.4420, -0.2641], grad_fn=<SliceBackward>) (...)
more tensor([-0.1037, 0.1878, -0.5839, -0.4437, -0.1683], grad_fn=<SliceBackward>) (...)
effective tensor([-0.1794, 0.2419, -0.3182, -0.2252, -0.1534], grad_fn=<SliceBackward>) (...)
that tensor([-0.1146, -0.1935, -0.3615, -0.4998, -0.1000], grad_fn=<SliceBackward>) (...)
a tensor([-0.2107, -0.2103, -0.1996, 0.0046, -0.1112], grad_fn=<SliceBackward>) (...)
regular tensor([ 0.2236, -0.0613, -0.5496, -0.3562, 0.1022], grad_fn=<SliceBackward>) (...)
doctor tensor([ 0.1275, -0.0589, -0.0283, -0.1557, -0.9282], grad_fn=<SliceBackward>) (...)
. tensor([ 0.1765, 0.0812, -0.1684, -0.2818, 0.0134], grad_fn=<SliceBackward>) (...)
</s> tensor([-0.0409, -0.0024, 0.0107, -0.0183, -0.0479], grad_fn=<SliceBackward>) (...)
```
I want to be able to do something similar with the hugging transformers library, but I can't find any alignment methods.
Any suggestions? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1526/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1526/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1525 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1525/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1525/comments | https://api.github.com/repos/huggingface/transformers/issues/1525/events | https://github.com/huggingface/transformers/issues/1525 | 507,282,099 | MDU6SXNzdWU1MDcyODIwOTk= | 1,525 | Understanding run_glue in distributed mode | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"To understand how DDP synchronize across processes, you can read:\r\n- the official doc for DDP: https://pytorch.org/docs/stable/nn.html?highlight=distributed%20data%20parallel#torch.nn.parallel.DistributedDataParallel\r\n- this detailed blog post I did a few months ago: https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255\r\n\r\nYes, `tr_loss` is only for the first device, this is just an information to follow the training. To get the average loss over all devices, you can add a step of synchronization of losses. It's pretty simple to do, here is an example I wrote this summer for our NAACL tutorial: https://github.com/huggingface/naacl_transfer_learning_tutorial/blob/master/utils.py#L53-L59\r\n\r\nDoing evaluation on only one device is better since the total metrics are not always averages of metrics on each node. For instance F1 uses a non linear fonction of the samples and as thus can't be computed in a distributed setting just by averaging F1 on all devices.\r\n\r\nAs a general note: these points can be improved (and we are aware of that and how to do it). Not including more complexities in examples like `run_glue` is a conscious decision to keep the examples simple to understand (I already think distributed training makes them a bit complex but honestly we can't really do without it on our big models). ",
"I did indeed read through the official documentation as well as your blog post and other (official and non-official) tutorials. However, many were outdated or didn't go into details about setting up the actual training loop and all the intricacies that are involved. It often seems to be explained as \"wow, you can do so much cool stuff with this because it is _D I S T R I B U T E D_\", but then details are absolutely lacking and often you're left to figure out how things work by going through source code. I did set up initialisation and training, but then I wasn't sure how to deal with the gathered loss and validating/testing.\r\n\r\nThat being said, thank you very much for your response, this is very helpful! I had also never heard about ignite. Great. Now I'll have to refactor all my code! (Kidding, even though it might be worth my time to look into it.)\r\n\r\nClosing this. Thanks again for the information.",
"@thomwolf As a small update: I have decided that I will still use distributed testing and validating. However, the final metric (per epoch for validating) will be calculated on the collected results of all processes. In other words, after a validating iteration where the loss for all steps is saved as a tensor, gather ALL losses, and then average those. That seems like a good compromise to me. Something like this.\r\n\r\n```python\r\ndef gather_cat(x):\r\n gather = [torch.empty_like(x) for _ in range(dist.get_world_size())]\r\n dist.all_gather(gather, x)\r\n return torch.cat(gather)\r\n\r\n# ...\r\n# distributed:\r\nloss = model(...)\r\nloss = gather_cat(loss)\r\navg_loss = torch.mean(loss)\r\n# ... track avg_loss only in the first process for instance\r\n```\r\n\r\nII think this would be especially useful when you have a lot of data and you want your validation and test to run smoothly as well. If anything's wrong with approach, I'd be happy to hear about it."
] | 1,571 | 1,571 | 1,571 | COLLABORATOR | null | ## ❓ Questions & Help
In my own project I am building on top of `transformers` and I'd like to take advantage of DDP. For inspiration I've been looking at how different libraries implement that, as well as how `transformers` handles it. In particular, I've been looking at [`run_glue`](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py).
I guess the main issue that I have, is that I don't quite understand how the forward/backward is synced across processes. **Does it mean that before each forward and backward pass the processes are synced, and that the backward pass averages over all gradients for all processes?**
In addition, I am a bit confused about how `run_glue` presents its results. It seems that the logger is only active for the first process
https://github.com/huggingface/transformers/blob/be916cb3fb4579e278ceeaec11a6524662797d7f/examples/run_glue.py#L453-L455
**Does this mean that `tr_loss` in the following fragment is a generalization**, i.e. it's the training loss of only the first process and NOT an average over all processes. It's to give users _some_ idea of the results, but it is not a factual representation of the full training loss, since it only represents the loss of one process?
https://github.com/huggingface/transformers/blob/be916cb3fb4579e278ceeaec11a6524662797d7f/examples/run_glue.py#L493
I noticed that you do the testing with only one device:
https://github.com/huggingface/transformers/blob/be916cb3fb4579e278ceeaec11a6524662797d7f/examples/run_glue.py#L520
**What is the reasoning behind this?** Exactly what I suggested before, namely that you do not get all results back easily when running in distributed mode? In a setting with training, validating, and testing, would you recommend that only training is done distributed, and testing and validating on one device?
Thanks in advance for your time. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1525/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1524 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1524/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1524/comments | https://api.github.com/repos/huggingface/transformers/issues/1524/events | https://github.com/huggingface/transformers/issues/1524 | 507,270,125 | MDU6SXNzdWU1MDcyNzAxMjU= | 1,524 | Question on AllenNLP vocabulary and huggingface BERT out of sync | {
"login": "pruksmhc",
"id": 10094008,
"node_id": "MDQ6VXNlcjEwMDk0MDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/10094008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pruksmhc",
"html_url": "https://github.com/pruksmhc",
"followers_url": "https://api.github.com/users/pruksmhc/followers",
"following_url": "https://api.github.com/users/pruksmhc/following{/other_user}",
"gists_url": "https://api.github.com/users/pruksmhc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pruksmhc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pruksmhc/subscriptions",
"organizations_url": "https://api.github.com/users/pruksmhc/orgs",
"repos_url": "https://api.github.com/users/pruksmhc/repos",
"events_url": "https://api.github.com/users/pruksmhc/events{/privacy}",
"received_events_url": "https://api.github.com/users/pruksmhc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,576 | 1,576 | NONE | null | Perhaps this should also be posted in the allennlp repo.
I'm currently trying to use a pretrained model (clinicalBERT) with a different set of vocabulary with huggingface's BertModel. Even though the code runs, I'm not 100% convinced that the vocabulary index to weight mappings are synced between the allennlp vocabulalry object and the BertModel. The tokenizer used for clinicalBERT is scispacy, not wordpiece. First of all, are there any examples in this repo or other places online that does this (has a stack of allennlp vocab + huggingface BERT, and loading a pretrained BERT model with vocab.txt?).
This is what I do:
` model = pytorch_transformers.BertModel.from_pretrained("my/path/to/clinicalBERT", output_hidden_states=True)`
Then, for the vocabulary, I do:
` vocab = Vocabulary(counter=None, max_vocab_size=max_v_sizes) `
` vocab.set_from_file(filename="path/to/clinicalBERT/vocab.txt", is_padded=1, namespace="scispacy")`
I then use resize_tokens on the model, and use a TokenIndexer from allennlp and index my instances with that vocabulary. Even at this point though, ` vocab.get_token_from_index(0, "scispacy")`
returns ` @@PADDING@@` , and ` @@UNKNOWN@@` is at index 101 for the vocab (even though allennlp's vocabulary to my understanding sets ` @@UNKNOWN@@` to 1).
What should I do to realign the Vocabulary back to the vocab index -> weight embedding mapping in the model?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1524/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1523 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1523/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1523/comments | https://api.github.com/repos/huggingface/transformers/issues/1523/events | https://github.com/huggingface/transformers/issues/1523 | 507,183,517 | MDU6SXNzdWU1MDcxODM1MTc= | 1,523 | Why the codes of training BERT from scratch are deprecated | {
"login": "anhnt170489",
"id": 24732444,
"node_id": "MDQ6VXNlcjI0NzMyNDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/24732444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anhnt170489",
"html_url": "https://github.com/anhnt170489",
"followers_url": "https://api.github.com/users/anhnt170489/followers",
"following_url": "https://api.github.com/users/anhnt170489/following{/other_user}",
"gists_url": "https://api.github.com/users/anhnt170489/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anhnt170489/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anhnt170489/subscriptions",
"organizations_url": "https://api.github.com/users/anhnt170489/orgs",
"repos_url": "https://api.github.com/users/anhnt170489/repos",
"events_url": "https://api.github.com/users/anhnt170489/events{/privacy}",
"received_events_url": "https://api.github.com/users/anhnt170489/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"They were community provided and the core team didn't have the bandwidth to maintain them.\r\n\r\nAlso we want to limit the number of single-model examples now and favor examples that work for a range of models.\r\n\r\nIf you want to update them to the current version of the repo and add the various models (for instance all the models currently in `run_lm_finetuning`), happy to review a PR.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,576 | 1,576 | NONE | null | ## ❓ Questions & Help
I'm wondering why the team removed the codes of training BERT from scratch, including pregenerate_training_data.py and finetune_on_pregenerated.py. They're very helpful and I still continue developing them to train the BERT as well as Roberta from scratch. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1523/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1523/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1522 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1522/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1522/comments | https://api.github.com/repos/huggingface/transformers/issues/1522/events | https://github.com/huggingface/transformers/issues/1522 | 507,116,851 | MDU6SXNzdWU1MDcxMTY4NTE= | 1,522 | When to support Albert? | {
"login": "c0derm4n",
"id": 18226382,
"node_id": "MDQ6VXNlcjE4MjI2Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/18226382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c0derm4n",
"html_url": "https://github.com/c0derm4n",
"followers_url": "https://api.github.com/users/c0derm4n/followers",
"following_url": "https://api.github.com/users/c0derm4n/following{/other_user}",
"gists_url": "https://api.github.com/users/c0derm4n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c0derm4n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c0derm4n/subscriptions",
"organizations_url": "https://api.github.com/users/c0derm4n/orgs",
"repos_url": "https://api.github.com/users/c0derm4n/repos",
"events_url": "https://api.github.com/users/c0derm4n/events{/privacy}",
"received_events_url": "https://api.github.com/users/c0derm4n/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please use the search function. There's an open issue about albert here: https://github.com/huggingface/transformers/issues/1370"
] | 1,571 | 1,571 | 1,571 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Do you have any plan to support Google's new model-ALBERT? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1522/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1521/comments | https://api.github.com/repos/huggingface/transformers/issues/1521/events | https://github.com/huggingface/transformers/issues/1521 | 507,080,976 | MDU6SXNzdWU1MDcwODA5NzY= | 1,521 | Downloading model in distributed mode | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This should be fixed in most of the examples through the use of `torch.distributed.barrier`.\r\nE.g. here: https://github.com/huggingface/transformers/blob/master/examples/run_glue.py#L473\r\n\r\nDon't hesitate to submit a PR if some examples don't make use of this technique yet.",
"Thanks for the quick reply! So to ensure that I understand this correctly: barrier blocks until all processes are synchronized (i.e. have reached that point). So before we enter the loading of the model, we block and only the first process continues (and downloads the model and vocab). After successfully downloading the required files, the first process also reaches barrier() and thus satisfying the need for all processes to have called the function and lifting the block. Then the other processes also continue (but find that the model has already been downloaded, so get it from cache). ",
"Yes"
] | 1,571 | 1,571 | 1,571 | COLLABORATOR | null | ## 🐛 Bug
When running in distributed mode with `n` processes, a new model will be download `n` times. I don't think that's what you want. I found [this related issue](https://github.com/huggingface/transformers/issues/44) but that only fixed the race condition. Downloads still happen in parallel. Is there a way to only download the model once? Perhaps by passing a `local_rank` parameter and only downloading when `local_rank==0`?
Especially for large models this is not ideal as i. they take up a lot of space (multiplied by the number of processes) ii. downloading is extra slow because it happens multiple times in parallel, limiting bandwidth.
```bash
15-Oct 03:08:45 - [INFO]: https://storage.googleapis.com/sf-ctrl/pytorch/seqlen256_v1.bin not found in cache or force_download set to True, downloading to /tmp/tmp0amm9x2s
15-Oct 03:08:45 - [INFO]: https://storage.googleapis.com/sf-ctrl/pytorch/seqlen256_v1.bin not found in cache or force_download set to True, downloading to /tmp/tmp7wpg48uj
15-Oct 03:08:45 - [INFO]: https://storage.googleapis.com/sf-ctrl/pytorch/seqlen256_v1.bin not found in cache or force_download set to True, downloading to /tmp/tmp89svv055
15-Oct 03:08:45 - [INFO]: https://storage.googleapis.com/sf-ctrl/pytorch/seqlen256_v1.bin not found in cache or force_download set to True, downloading to /tmp/tmp7yk94f8s
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6552025106/6552025106 [03:57<00:00, 27631147.05B/s]
15-Oct 03:12:42 - [INFO]: copying /tmp/tmp89svv055 to cache at /home/bram/.cache/torch/transformers/c146cc96724f27295a0c3ada1fbb3632074adf87e9aef8269e44c9208787f8c8.b986347cbab65fa276683efbb9c2f7ee22552277bcf6e1f1166557ed0852fdf0
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6552025106/6552025106 [03:57<00:00, 27614197.65B/s]
15-Oct 03:12:43 - [INFO]: copying /tmp/tmp7wpg48uj to cache at /home/bram/.cache/torch/transformers/c146cc96724f27295a0c3ada1fbb3632074adf87e9aef8269e44c9208787f8c8.b986347cbab65fa276683efbb9c2f7ee22552277bcf6e1f1166557ed0852fdf0
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6552025106/6552025106 [03:57<00:00, 27605553.23B/s]
15-Oct 03:12:43 - [INFO]: copying /tmp/tmp0amm9x2s to cache at /home/bram/.cache/torch/transformers/c146cc96724f27295a0c3ada1fbb3632074adf87e9aef8269e44c9208787f8c8.b986347cbab65fa276683efbb9c2f7ee22552277bcf6e1f1166557ed0852fdf0
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6552025106/6552025106 [03:57<00:00, 27599668.53B/s]
15-Oct 03:12:43 - [INFO]: copying /tmp/tmp7yk94f8s to cache at /home/bram/.cache/torch/transformers/c146cc96724f27295a0c3ada1fbb3632074adf87e9aef8269e44c9208787f8c8.b986347cbab65fa276683efbb9c2f7ee22552277bcf6e1f1166557ed0852fdf0
```
An alternative would be to already 'touch' the file in the .cache _before_ downloading, and when it exists, not initiate a new download. (Taking into account sudden abortions.)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1521/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1520/comments | https://api.github.com/repos/huggingface/transformers/issues/1520/events | https://github.com/huggingface/transformers/issues/1520 | 507,068,447 | MDU6SXNzdWU1MDcwNjg0NDc= | 1,520 | Changelog | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @BramVanroy, we detail the changes in the [\"Releases\" section](https://github.com/huggingface/transformers/releases). Are you thinking of something different?\r\n\r\nHaving a documentation per-version is on our roadmap, it should help tremendously regarding version changes.",
"Ah, I was looking inside different major version commits for some sort of changelog file - which still might be useful in itself, as you indicate. But having the Github releases is exactly what I was after! Apologies, should've thought this through."
] | 1,571 | 1,571 | 1,571 | COLLABORATOR | null | ## 🚀 Add changelog between versions
New versions are pushed to PyPi at a steady pace, but it's not evident to find the changes that new versions bring. Is there a changelog anywhere? Something similar to a HISTORY file would be nice. I think it would definitely contribute to better documentation!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1520/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1519/comments | https://api.github.com/repos/huggingface/transformers/issues/1519/events | https://github.com/huggingface/transformers/issues/1519 | 507,029,636 | MDU6SXNzdWU1MDcwMjk2MzY= | 1,519 | Accuracy drop in finetuning roBERTa | {
"login": "Arjunsankarlal",
"id": 28828445,
"node_id": "MDQ6VXNlcjI4ODI4NDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/28828445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arjunsankarlal",
"html_url": "https://github.com/Arjunsankarlal",
"followers_url": "https://api.github.com/users/Arjunsankarlal/followers",
"following_url": "https://api.github.com/users/Arjunsankarlal/following{/other_user}",
"gists_url": "https://api.github.com/users/Arjunsankarlal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arjunsankarlal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arjunsankarlal/subscriptions",
"organizations_url": "https://api.github.com/users/Arjunsankarlal/orgs",
"repos_url": "https://api.github.com/users/Arjunsankarlal/repos",
"events_url": "https://api.github.com/users/Arjunsankarlal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arjunsankarlal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,576 | 1,576 | NONE | null | ## ❓ How to achieve GLUE leaderboard acc for QQP task trained with roBERTa?
I am trying to finetune roberta-base model for Quora Question Pair task. In the [GLUE Leaderboard](url) the accuracy claimed is F1 / Accuracy is 74.3/90.2
I am training with the following command to fine tune the roberta-base model,
> CUDA_VISIBLE_DEVICES=0 python run_glue.py --data_dir /home/arjun/datasets/quora_roberta --model_type roberta --model_name_or_path /home/arjun/transformers/models/roberta --task_name qqp --output_dir /home/arjun/transformers/output_models/run-2/ --do_train --do_eval --do_lower_case --logging_steps 250 --save_steps 5000
All other params are default params.
I get the accuracy after 3 epochs as (taken from the eval_results.txt file),
acc = 0.6329982984448578
acc_and_f1 = 0.3164991492224289
f1 = 0.0
Also I ran evaluation for all the checkpoints stored, for every checkpoint I got the same accuracy which is quite confusing.
I'm I doing something wrong here? How to reproduce the accuracy mentioned in GLUE leaderboard?
Thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1519/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1518/comments | https://api.github.com/repos/huggingface/transformers/issues/1518/events | https://github.com/huggingface/transformers/issues/1518 | 506,944,288 | MDU6SXNzdWU1MDY5NDQyODg= | 1,518 | Predefined token classification | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,576 | 1,576 | NONE | null | Hello,
I am just wondering if "BertForTokenClassification" can be modified to classify predefined tokens (just targeted tokens). E.g. in NER it identifies the entity and then classify it, but in my case I want to classify the targeted tokens only in a sentence with a predefined labels. I thought of adding the targeted token in the sentence to the end of the sentence to let the model knows this is the targeted token but I am sure it is not a good idea because I need to again add a label for the added token. (silly Q I know :)
Any idea on how to approach it? I feel that NER task "BertForTokenClassification"
model can be modified to achieve it but do not know how.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1518/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1517/comments | https://api.github.com/repos/huggingface/transformers/issues/1517/events | https://github.com/huggingface/transformers/issues/1517 | 506,913,371 | MDU6SXNzdWU1MDY5MTMzNzE= | 1,517 | Unable to import TF models | {
"login": "tylerjthomas9",
"id": 36181311,
"node_id": "MDQ6VXNlcjM2MTgxMzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/36181311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tylerjthomas9",
"html_url": "https://github.com/tylerjthomas9",
"followers_url": "https://api.github.com/users/tylerjthomas9/followers",
"following_url": "https://api.github.com/users/tylerjthomas9/following{/other_user}",
"gists_url": "https://api.github.com/users/tylerjthomas9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tylerjthomas9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tylerjthomas9/subscriptions",
"organizations_url": "https://api.github.com/users/tylerjthomas9/orgs",
"repos_url": "https://api.github.com/users/tylerjthomas9/repos",
"events_url": "https://api.github.com/users/tylerjthomas9/events{/privacy}",
"received_events_url": "https://api.github.com/users/tylerjthomas9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you run the following and report back? It might be that you have some namespace conflict.\r\n\r\n```python\r\n! pip list | grep \"tensorflow\" # Check tensorflow==2.0.0, tensorflow-gpu==2.0.0\r\n! pip list | grep \"transformers\" # Check transformers>=2.0.0\r\n```\r\n",
"Cleaning the environment fixed the issue. You are right, there was a namespace conflict.",
"@tylerjthomas9 - I'm having the same problem. Can you elaborate on what you did to fix the namespace conflict?",
"@GrahamboJangles If you have issues with the import of tensorflow models on a blank colab notebook, please make sure you have the correct tensorflow version installed in your colab environment (2.0+). You can do so by overriding the already-installed TensorFlow with the following command:\r\n\r\n```\r\n!pip install tensorflow==2.0.0\r\n```",
"@LysandreJik - I made sure I had Tensorflow 2.0.0 and I still get the same error. \r\n```\r\n100%|██████████| 231508/231508 [00:00<00:00, 2665916.96B/s]\r\n100%|██████████| 313/313 [00:00<00:00, 195011.46B/s]\r\n100%|██████████| 440473133/440473133 [00:05<00:00, 73953508.44B/s]\r\n100%|██████████| 815973/815973 [00:00<00:00, 5548125.39B/s]\r\n100%|██████████| 458495/458495 [00:00<00:00, 3162846.19B/s]\r\nftfy or spacy is not installed using BERT BasicTokenizer instead of SpaCy & ftfy.\r\n100%|██████████| 273/273 [00:00<00:00, 154235.59B/s]\r\n100%|██████████| 478750579/478750579 [00:08<00:00, 56444018.22B/s]\r\nThis tokenizer does not make use of special tokens. Input is returned with no modification.\r\nThis tokenizer does not make use of special tokens. Input is returned with no modification.\r\nThis tokenizer does not make use of special tokens.\r\n100%|██████████| 1042301/1042301 [00:00<00:00, 7120216.12B/s]\r\n100%|██████████| 456318/456318 [00:00<00:00, 3926917.54B/s]\r\n100%|██████████| 176/176 [00:00<00:00, 110459.00B/s]\r\n100%|██████████| 548118077/548118077 [00:09<00:00, 59420986.50B/s]\r\nThis tokenizer does not make use of special tokens. Input is returned with no modification.\r\nThis tokenizer does not make use of special tokens. Input is returned with no modification.\r\nThis tokenizer does not make use of special tokens.\r\n4608350B [00:00, 42689870.73B/s]\r\n2257285B [00:00, 28527684.80B/s] \r\n100%|██████████| 611/611 [00:00<00:00, 408988.15B/s]\r\n100%|██████████| 6552025106/6552025106 [03:27<00:00, 31645156.91B/s]\r\nThis tokenizer does not make use of special tokens. Input is returned with no modification.\r\nThis tokenizer does not make use of special tokens. Input is returned with no modification.\r\nThis tokenizer does not make use of special tokens.\r\n100%|██████████| 9143613/9143613 [00:00<00:00, 29615841.04B/s]\r\n100%|██████████| 606/606 [00:00<00:00, 397210.22B/s]\r\n100%|██████████| 1140884800/1140884800 [00:21<00:00, 53037879.64B/s]\r\nThis tokenizer does not make use of special tokens. Input is returned with no modification.\r\nThis tokenizer does not make use of special tokens. Input is returned with no modification.\r\nThis tokenizer does not make use of special tokens.\r\n100%|██████████| 798011/798011 [00:00<00:00, 5526095.41B/s]\r\n100%|██████████| 641/641 [00:00<00:00, 405390.36B/s]\r\n100%|██████████| 467042463/467042463 [00:08<00:00, 52695048.04B/s]\r\n100%|██████████| 1452741/1452741 [00:00<00:00, 8067948.45B/s]\r\n100%|██████████| 1008321/1008321 [00:00<00:00, 5690556.88B/s]\r\n100%|██████████| 396/396 [00:00<00:00, 225243.34B/s]\r\n100%|██████████| 830122454/830122454 [00:24<00:00, 33868891.23B/s]\r\n100%|██████████| 492/492 [00:00<00:00, 307311.63B/s]\r\n100%|██████████| 267967963/267967963 [00:14<00:00, 18543027.08B/s]\r\n100%|██████████| 898823/898823 [00:00<00:00, 6115044.08B/s]\r\n100%|██████████| 456318/456318 [00:00<00:00, 3196420.05B/s]\r\n100%|██████████| 473/473 [00:00<00:00, 295048.45B/s]\r\n100%|██████████| 501200538/501200538 [00:06<00:00, 77291522.27B/s]\r\n---------------------------------------------------------------------------\r\nOSError Traceback (most recent call last)\r\n/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)\r\n 132 try:\r\n--> 133 resolved_config_file = cached_path(config_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies)\r\n 134 except EnvironmentError:\r\n\r\n3 frames\r\nOSError: file roberta-base not found\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)\r\n 143 ', '.join(cls.pretrained_config_archive_map.keys()),\r\n 144 config_file, CONFIG_NAME)\r\n--> 145 raise EnvironmentError(msg)\r\n 146 \r\n 147 if resolved_config_file == config_file:\r\n\r\nOSError: Model name 'roberta-base' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'roberta-base' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.\r\n```",
"@GrahamboJangles this does not seem to be the same error. It seems to me that you're trying to load a RoBERTa checkpoint in a BERT model/tokenizer.",
"@LysandreJik - Maybe that is the problem, but `tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')` so I don't see why it would be trying to use a RoBERTa checkpoint unless there's something I'm missing. Also, when I try with the RobertaModel I get the same error.",
"Could you provide a script so that we can try and reproduce the error on our side?",
"@LysandreJik - [Here's my Colab notebook.](https://colab.research.google.com/drive/1TeCwrGAzEH4IMcgLewR8OJRVwlnmJxdd)"
] | 1,571 | 1,572 | 1,571 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [x] the official example scripts: Quick tour TF 2.0 training and PyTorch interoperability from github homepage
## To Reproduce
Steps to reproduce the behavior:
1. Install libraries (update tensorflow to 2.0.0)
```
!pip install tensorflow-gpu
!pip install torch
!pip install transformers
```
2. Run example
```import tensorflow as tf
import tensorflow_datasets
from transformers import *
# Load dataset, tokenizer, model from pretrained model/vocabulary
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
data = tensorflow_datasets.load('glue/mrpc')
# Prepare dataset for GLUE as a tf.data.Dataset instance
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
valid_dataset = valid_dataset.batch(64)
# Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
# Train and evaluate using tf.keras.Model.fit()
history = model.fit(train_dataset, epochs=2, steps_per_epoch=115,
validation_data=valid_dataset, validation_steps=7)
# Load the TensorFlow model in PyTorch for inspection
model.save_pretrained('./save/')
pytorch_model = BertForSequenceClassification.from_pretrained('./save/', from_tf=True)
# Quickly test a few predictions - MRPC is a paraphrasing task, let's see if our model learned the task
sentence_0 = "This research was consistent with his findings."
sentence_1 = "His findings were compatible with this research."
sentence_2 = "His findings were not compatible with this research."
inputs_1 = tokenizer.encode_plus(sentence_0, sentence_1, add_special_tokens=True, return_tensors='pt')
inputs_2 = tokenizer.encode_plus(sentence_0, sentence_2, add_special_tokens=True, return_tensors='pt')
pred_1 = pytorch_model(**inputs_1)[0].argmax().item()
pred_2 = pytorch_model(**inputs_2)[0].argmax().item()
print("sentence_1 is", "a paraphrase" if pred_1 else "not a paraphrase", "of sentence_0")
print("sentence_2 is", "a paraphrase" if pred_2 else "not a paraphrase", "of sentence_0")
```
3. Get error
```
5 # Load dataset, tokenizer, model from pretrained model/vocabulary
6 tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
----> 7 model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
8 data = tensorflow_datasets.load('glue/mrpc')
9
NameError: name 'TFBertForSequenceClassification' is not defined
```
## Environment
Google collab
I get the same error when trying to use any TF version of the transformers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1517/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1516/comments | https://api.github.com/repos/huggingface/transformers/issues/1516/events | https://github.com/huggingface/transformers/pull/1516 | 506,859,955 | MDExOlB1bGxSZXF1ZXN0MzI3OTYxMzk4 | 1,516 | Fused optimizer and gradient clipper using apex | {
"login": "slayton58",
"id": 4992598,
"node_id": "MDQ6VXNlcjQ5OTI1OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4992598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slayton58",
"html_url": "https://github.com/slayton58",
"followers_url": "https://api.github.com/users/slayton58/followers",
"following_url": "https://api.github.com/users/slayton58/following{/other_user}",
"gists_url": "https://api.github.com/users/slayton58/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slayton58/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slayton58/subscriptions",
"organizations_url": "https://api.github.com/users/slayton58/orgs",
"repos_url": "https://api.github.com/users/slayton58/repos",
"events_url": "https://api.github.com/users/slayton58/events{/privacy}",
"received_events_url": "https://api.github.com/users/slayton58/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1516?src=pr&el=h1) Report\n> Merging [#1516](https://codecov.io/gh/huggingface/transformers/pull/1516?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1d4d07025635c998acf8c7abab426b013e87206c?src=pr&el=desc) will **decrease** coverage by `0.18%`.\n> The diff coverage is `25%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1516?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1516 +/- ##\n==========================================\n- Coverage 85.17% 84.98% -0.19% \n==========================================\n Files 94 94 \n Lines 13920 13953 +33 \n==========================================\n+ Hits 11856 11858 +2 \n- Misses 2064 2095 +31\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1516?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tests/optimization\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1516/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL29wdGltaXphdGlvbl90ZXN0LnB5) | `99.02% <100%> (ø)` | :arrow_up: |\n| [transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/1516/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL29wdGltaXphdGlvbi5weQ==) | `75.2% <18.18%> (-21.43%)` | :arrow_down: |\n| [transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1516/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `74.17% <0%> (-2.2%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1516?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1516?src=pr&el=footer). Last update [1d4d070...a359214](https://codecov.io/gh/huggingface/transformers/pull/1516?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Does this apply that apex's FusedAdam is a fused implementation of AdamW rather than Adam (vanilla)?\r\n\r\nIt might be nice to first try to import [torch's native AdamW](https://pytorch.org/docs/stable/_modules/torch/optim/adamw.html#AdamW) (from 1.2), and if not available fallback to the transformers implementation. Cf. https://github.com/huggingface/transformers/pull/1593",
"https://github.com/huggingface/transformers/pull/1516/files#diff-59de7b854fbd60c6ba87f68027a2db36R208 enables AdamW support in the `FusedAdam` optimizer. \r\n\r\nI see you've done the work to check for native PyT implementation & defining if necessary, it should be an easy rebase & resolve if/when these individual PRs get merged.",
"Oh, my bad. I hadn't noticed this `adam_w_mode=True` in Apex's Adam before. Good to know!",
"Updated to clip gradient outside of gradient accumulation inner loop as we are trying to do now.\r\n\r\nWe should update the other training scripts as well (`run_glue` for instance). I think it may be soon time to refactor and gather the common portions of the examples (which are numerous) so we spend less time synchronizing them.\r\n\r\nWhat do you think @LysandreJik?",
"I agree that some scripts definitely need some refactoring and that having shared pieces of code like that gradient clipping seems like the way to go.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,583 | 1,583 | CONTRIBUTOR | null | Significant (40ms / iter for XLNet squad finetuning) performance increase.
Also adds fused grad clipping, gives further ~30ms / iter saving in the same XLNet squad case.
Redefines the `AdamW` implementation such that the existing code will be used if apex's multi_tensor_apply code isn't available, and should drop-in speedup all existing scripts using `AdamW`. Also abstracts the gradient clipping (in order to keep run scripts concise and move apex-specific logic into `optimizations.py`) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1516/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1516",
"html_url": "https://github.com/huggingface/transformers/pull/1516",
"diff_url": "https://github.com/huggingface/transformers/pull/1516.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1516.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1515/comments | https://api.github.com/repos/huggingface/transformers/issues/1515/events | https://github.com/huggingface/transformers/issues/1515 | 506,754,502 | MDU6SXNzdWU1MDY3NTQ1MDI= | 1,515 | Main and train for CTRL model | {
"login": "roholazandie",
"id": 7584674,
"node_id": "MDQ6VXNlcjc1ODQ2NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7584674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roholazandie",
"html_url": "https://github.com/roholazandie",
"followers_url": "https://api.github.com/users/roholazandie/followers",
"following_url": "https://api.github.com/users/roholazandie/following{/other_user}",
"gists_url": "https://api.github.com/users/roholazandie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roholazandie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roholazandie/subscriptions",
"organizations_url": "https://api.github.com/users/roholazandie/orgs",
"repos_url": "https://api.github.com/users/roholazandie/repos",
"events_url": "https://api.github.com/users/roholazandie/events{/privacy}",
"received_events_url": "https://api.github.com/users/roholazandie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The CTRL model has been added to the [run_generation](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py) script as of now. We will implement it in other scripts as time goes on, but as it has the same API as the other models hosted on our repo it the training script would be very similar to the current training scripts.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,576 | 1,576 | NONE | null | ## 🚀 Feature
I have seen the CTRL model has been added to the repo but I don't see any script to run or train it. Is this going to be added soon?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1515/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1514/comments | https://api.github.com/repos/huggingface/transformers/issues/1514/events | https://github.com/huggingface/transformers/issues/1514 | 506,736,406 | MDU6SXNzdWU1MDY3MzY0MDY= | 1,514 | /pytorch/aten/src/THC/THCTensorScatterGather.cu:100: void THCudaTensor_gatherKernel(TensorInfo<Real, IndexType>, TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = 3]: block: [4,0,0], thread: [319,0,0] Assertion `indexValue >= 0 && indexValue < src.sizes[dim]` failed. | {
"login": "DeepFool",
"id": 48155157,
"node_id": "MDQ6VXNlcjQ4MTU1MTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/48155157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DeepFool",
"html_url": "https://github.com/DeepFool",
"followers_url": "https://api.github.com/users/DeepFool/followers",
"following_url": "https://api.github.com/users/DeepFool/following{/other_user}",
"gists_url": "https://api.github.com/users/DeepFool/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DeepFool/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DeepFool/subscriptions",
"organizations_url": "https://api.github.com/users/DeepFool/orgs",
"repos_url": "https://api.github.com/users/DeepFool/repos",
"events_url": "https://api.github.com/users/DeepFool/events{/privacy}",
"received_events_url": "https://api.github.com/users/DeepFool/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Could you please provide more information? Where does this error occur? Are you using one of our example scripts? I believe there are templates you can use so we may help you more efficiently.",
"> Could you please provide more information? Where does this error occur? Are you using one of our example scripts? I believe there are templates you can use so we may help you more efficiently.\r\n\r\nI run run_squad.py in a chinese reading comprehension dataset. I change sereval places in utils_squad.py ,I throw all examples which have no answer because my passage is so longth. if i don't throw all examples without answer, my trained model will predict no answer in test dataset. This error occur in the begining of training. I have no idea why this error happen ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,571 | 1,582 | 1,582 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I have no idea about this error。 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1514/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1513/comments | https://api.github.com/repos/huggingface/transformers/issues/1513/events | https://github.com/huggingface/transformers/pull/1513 | 506,717,326 | MDExOlB1bGxSZXF1ZXN0MzI3ODUwOTMy | 1,513 | Force einsum to run in fp16 | {
"login": "slayton58",
"id": 4992598,
"node_id": "MDQ6VXNlcjQ5OTI1OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4992598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slayton58",
"html_url": "https://github.com/slayton58",
"followers_url": "https://api.github.com/users/slayton58/followers",
"following_url": "https://api.github.com/users/slayton58/following{/other_user}",
"gists_url": "https://api.github.com/users/slayton58/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slayton58/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slayton58/subscriptions",
"organizations_url": "https://api.github.com/users/slayton58/orgs",
"repos_url": "https://api.github.com/users/slayton58/repos",
"events_url": "https://api.github.com/users/slayton58/events{/privacy}",
"received_events_url": "https://api.github.com/users/slayton58/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1513?src=pr&el=h1) Report\n> Merging [#1513](https://codecov.io/gh/huggingface/transformers/pull/1513?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f62f992cf7aa7f1e4eb0d1ef912bd06d26c4dd8c?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1513?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1513 +/- ##\n=======================================\n Coverage 85.98% 85.98% \n=======================================\n Files 91 91 \n Lines 13579 13579 \n=======================================\n Hits 11676 11676 \n Misses 1903 1903\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1513?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1513?src=pr&el=footer). Last update [f62f992...4e6a557](https://codecov.io/gh/huggingface/transformers/pull/1513?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, thanks a lot for your work on that @slayton58!",
"Why not adopt this change to other finetuning tasks? Currently, I only see the code snippet in the squad task.",
"Einsum is tricky because it can express both tasks that are very likely to be good in fp16 (gemm, batch-gemm) and some that are not (large summations).\n\nIt could be adopted for other tasks but it needs to be done task-by-task (with testing) to ensure that no problems are caused. \n\n> On Jul 12, 2021, at 2:01 AM, Gordon Lee ***@***.***> wrote:\n> \n> \n> Why not adopt this change to other finetuning tasks? Currently, I only see the code snippet in the squad task.\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n"
] | 1,571 | 1,626 | 1,571 | CONTRIBUTOR | null | As noted in the comments, this will force `torch.einsum` to run in fp16 for the squad finetuning task (it should be valid for other tasks, but I haven't verified that) when run with `--fp16_opt_level="O1"` which is the default.
Otherwise, `torch.einsum` is treated as a "promote" operation by `apex.amp`, and if any argument is fp32, all arguments will be cast to fp32, and the answer will return in fp32. This will happen at any point when a parameter is used (XLNet in particular suffers here). Given all uses I've seen for einsum are to express gemm, batched-gemm and transpose, operations we'd normally consider to be safe in fp16, this should be a safe change. From a performance standpoint it allows TensorCore usage which can significantly boost achieved performance.
This change doesn't affect accuracy in my testing, and gives ~20-25% higher throughput on XLNet-based finetuning. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1513/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1513",
"html_url": "https://github.com/huggingface/transformers/pull/1513",
"diff_url": "https://github.com/huggingface/transformers/pull/1513.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1513.patch",
"merged_at": 1571127901000
} |
https://api.github.com/repos/huggingface/transformers/issues/1512 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1512/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1512/comments | https://api.github.com/repos/huggingface/transformers/issues/1512/events | https://github.com/huggingface/transformers/pull/1512 | 506,517,337 | MDExOlB1bGxSZXF1ZXN0MzI3Njk2MjY1 | 1,512 | Fix import error in script to convert faisreq roberta checkpoints | {
"login": "louismartin",
"id": 12654189,
"node_id": "MDQ6VXNlcjEyNjU0MTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/12654189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louismartin",
"html_url": "https://github.com/louismartin",
"followers_url": "https://api.github.com/users/louismartin/followers",
"following_url": "https://api.github.com/users/louismartin/following{/other_user}",
"gists_url": "https://api.github.com/users/louismartin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/louismartin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louismartin/subscriptions",
"organizations_url": "https://api.github.com/users/louismartin/orgs",
"repos_url": "https://api.github.com/users/louismartin/repos",
"events_url": "https://api.github.com/users/louismartin/events{/privacy}",
"received_events_url": "https://api.github.com/users/louismartin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1512?src=pr&el=h1) Report\n> Merging [#1512](https://codecov.io/gh/huggingface/transformers/pull/1512?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a701c9b32126f1e6974d9fcb3a5c3700527d8559?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1512?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1512 +/- ##\n=======================================\n Coverage 85.98% 85.98% \n=======================================\n Files 91 91 \n Lines 13579 13579 \n=======================================\n Hits 11676 11676 \n Misses 1903 1903\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1512?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1512?src=pr&el=footer). Last update [a701c9b...49cba6e](https://codecov.io/gh/huggingface/transformers/pull/1512?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks good to me, thanks!"
] | 1,571 | 1,571 | 1,571 | CONTRIBUTOR | null | Fix ImportError in `convert_roberta_original_pytorch_checkpoint_to_pytorch.py`, see #1459.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1512/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1512",
"html_url": "https://github.com/huggingface/transformers/pull/1512",
"diff_url": "https://github.com/huggingface/transformers/pull/1512.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1512.patch",
"merged_at": 1571067633000
} |
https://api.github.com/repos/huggingface/transformers/issues/1511 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1511/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1511/comments | https://api.github.com/repos/huggingface/transformers/issues/1511/events | https://github.com/huggingface/transformers/pull/1511 | 506,470,872 | MDExOlB1bGxSZXF1ZXN0MzI3NjYwNzUy | 1,511 | Run squad with all model lq | {
"login": "qianliu0708",
"id": 20182089,
"node_id": "MDQ6VXNlcjIwMTgyMDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/20182089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qianliu0708",
"html_url": "https://github.com/qianliu0708",
"followers_url": "https://api.github.com/users/qianliu0708/followers",
"following_url": "https://api.github.com/users/qianliu0708/following{/other_user}",
"gists_url": "https://api.github.com/users/qianliu0708/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qianliu0708/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qianliu0708/subscriptions",
"organizations_url": "https://api.github.com/users/qianliu0708/orgs",
"repos_url": "https://api.github.com/users/qianliu0708/repos",
"events_url": "https://api.github.com/users/qianliu0708/events{/privacy}",
"received_events_url": "https://api.github.com/users/qianliu0708/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This does not seem to be related to our repo. Closing. Please don't reopen unless you want to submit a real PR."
] | 1,571 | 1,571 | 1,571 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1511/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1511",
"html_url": "https://github.com/huggingface/transformers/pull/1511",
"diff_url": "https://github.com/huggingface/transformers/pull/1511.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1511.patch",
"merged_at": null
} |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.