url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/709/comments | https://api.github.com/repos/huggingface/transformers/issues/709/events | https://github.com/huggingface/transformers/issues/709 | 459,154,612 | MDU6SXNzdWU0NTkxNTQ2MTI= | 709 | layer_norm_eps | {
"login": "suchithtuple",
"id": 50451555,
"node_id": "MDQ6VXNlcjUwNDUxNTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/50451555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suchithtuple",
"html_url": "https://github.com/suchithtuple",
"followers_url": "https://api.github.com/users/suchithtuple/followers",
"following_url": "https://api.github.com/users/suchithtuple/following{/other_user}",
"gists_url": "https://api.github.com/users/suchithtuple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suchithtuple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suchithtuple/subscriptions",
"organizations_url": "https://api.github.com/users/suchithtuple/orgs",
"repos_url": "https://api.github.com/users/suchithtuple/repos",
"events_url": "https://api.github.com/users/suchithtuple/events{/privacy}",
"received_events_url": "https://api.github.com/users/suchithtuple/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Because some people wanted to configure this: https://github.com/huggingface/pytorch-pretrained-BERT/pull/585",
"So I have to add `config.layer_norm_eps = 1e-12` if I am taking the config from the link above ?",
"You don't need to, it's the default value when instantiating a `BertConfig` class.",
"When I printed `dir(bert_config)` I am not able to see layer_norm_eps. ",
"I have figured out the problem. I have to initialize like this `BertConfig.from_json_file(config_file_path)` but rather I have done like this `BertConfig(config_file_path)`. My bad :disappointed: \r\n\r\nThanks for the reply. Closing the issue."
] | 1,561 | 1,561 | 1,561 | NONE | null | In [modeling.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/c304593d8fa93f25febe1458c63497a846749c89/pytorch_pretrained_bert/modeling.py#L303), why is `self.layer_norm_eps` used even though the config doesn't have this parameter? Check [here](https://github.com/google-research/bert#pre-trained-models).
Or am I missing something? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/709/timeline | completed | null | null |
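A minimal sketch of the two `BertConfig` instantiation paths discussed in the thread above (assuming the `pytorch_pretrained_bert` API of this era; `bert_config.json` is a placeholder for the Google checkpoint's config file):

```python
from pytorch_pretrained_bert import BertConfig

# from_json_file starts from the constructor defaults and then applies the
# JSON values, so fields absent from Google's published config, such as
# layer_norm_eps (added in PR #585), keep their library defaults.
config = BertConfig.from_json_file("bert_config.json")
print(config.layer_norm_eps)  # 1e-12, the default

# Passing the path positionally hits the vocab_size_or_config_json_file
# branch which, in this version, only copies the keys present in the JSON --
# which is why the reporter couldn't find layer_norm_eps in dir(bert_config).
config_bad = BertConfig("bert_config.json")
```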
https://api.github.com/repos/huggingface/transformers/issues/708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/708/comments | https://api.github.com/repos/huggingface/transformers/issues/708/events | https://github.com/huggingface/transformers/issues/708 | 458,904,180 | MDU6SXNzdWU0NTg5MDQxODA= | 708 | Future attention masking in GPT/GPT-2? | {
"login": "sksq96",
"id": 8557728,
"node_id": "MDQ6VXNlcjg1NTc3Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8557728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sksq96",
"html_url": "https://github.com/sksq96",
"followers_url": "https://api.github.com/users/sksq96/followers",
"following_url": "https://api.github.com/users/sksq96/following{/other_user}",
"gists_url": "https://api.github.com/users/sksq96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sksq96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sksq96/subscriptions",
"organizations_url": "https://api.github.com/users/sksq96/orgs",
"repos_url": "https://api.github.com/users/sksq96/repos",
"events_url": "https://api.github.com/users/sksq96/events{/privacy}",
"received_events_url": "https://api.github.com/users/sksq96/received_events",
"type": "User",
"site_admin": true
} | [] | closed | false | null | [] | [
"Hi Shubham,\r\nThis is a legacy from the original Tensorflow code (https://github.com/openai/finetune-transformer-lm/blob/master/train.py#L64-L69).",
"Thanks for the link to the original reference. "
] | 1,561 | 1,565 | 1,565 | NONE | null | Based on my understanding, [this](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling_gpt2.py#L288) is the place where future attention masking for the causal model happens.
If this is the case:
- Why is it called `bias`?
- Why is this in the core of the `Attention` module and not passed as an `attention_mask` parameter, similar to BERT?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/708/timeline | completed | null | null |
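A stripped-down sketch of the masking pattern at the line linked in the issue above (shapes are illustrative, not the library's exact code). Because the lower-triangular mask depends only on the context size, it is registered once as a buffer inside `Attention` (under the somewhat misleading name `bias`) rather than being rebuilt or passed in on every call like BERT's padding-oriented `attention_mask`:

```python
import torch

n_ctx = 8
# The "bias" buffer: lower-triangular, so position i may only attend to
# positions <= i.
bias = torch.tril(torch.ones(n_ctx, n_ctx)).view(1, 1, n_ctx, n_ctx)

scores = torch.randn(1, 1, n_ctx, n_ctx)   # raw attention scores
nd, ns = scores.size(-2), scores.size(-1)
b = bias[:, :, ns - nd:ns, :ns]
scores = scores * b - 1e10 * (1 - b)       # push future positions to ~-inf
probs = torch.softmax(scores, dim=-1)      # each row now ignores the future
```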
https://api.github.com/repos/huggingface/transformers/issues/707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/707/comments | https://api.github.com/repos/huggingface/transformers/issues/707/events | https://github.com/huggingface/transformers/pull/707 | 458,827,077 | MDExOlB1bGxSZXF1ZXN0MjkwMzQwNjIw | 707 | Update run_squad.py | {
"login": "saikalyan9981",
"id": 30959693,
"node_id": "MDQ6VXNlcjMwOTU5Njkz",
"avatar_url": "https://avatars.githubusercontent.com/u/30959693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saikalyan9981",
"html_url": "https://github.com/saikalyan9981",
"followers_url": "https://api.github.com/users/saikalyan9981/followers",
"following_url": "https://api.github.com/users/saikalyan9981/following{/other_user}",
"gists_url": "https://api.github.com/users/saikalyan9981/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saikalyan9981/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saikalyan9981/subscriptions",
"organizations_url": "https://api.github.com/users/saikalyan9981/orgs",
"repos_url": "https://api.github.com/users/saikalyan9981/repos",
"events_url": "https://api.github.com/users/saikalyan9981/events{/privacy}",
"received_events_url": "https://api.github.com/users/saikalyan9981/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707?src=pr&el=h1) Report\n> Merging [#707](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c304593d8fa93f25febe1458c63497a846749c89?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #707 +/- ##\n=======================================\n Coverage 62.27% 62.27% \n=======================================\n Files 18 18 \n Lines 3979 3979 \n=======================================\n Hits 2478 2478 \n Misses 1501 1501\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707?src=pr&el=footer). Last update [c304593...620f2c1](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/707?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Yes, that's for clarity"
] | 1,561 | 1,562 | 1,562 | NONE | null | `model = BertForQuestionAnswering.from_pretrained(args.bert_model)` is written twice.
I think the else branch is redundant there | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/707/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/707",
"html_url": "https://github.com/huggingface/transformers/pull/707",
"diff_url": "https://github.com/huggingface/transformers/pull/707.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/707.patch",
"merged_at": null
} |
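A hypothetical reconstruction of the pattern this PR objects to (the actual `run_squad.py` lines aren't quoted in the thread, and the maintainer's reply suggests the duplication was deliberate for readability):

```python
from pytorch_pretrained_bert import BertForQuestionAnswering

# Hypothetical shape of the code under discussion (`args` stands in for the
# script's parsed arguments): both branches end up building the model the
# same way, so the PR proposes dropping the else.
if args.do_train:
    model = BertForQuestionAnswering.from_pretrained(args.bert_model)
    # ... training ...
else:
    model = BertForQuestionAnswering.from_pretrained(args.bert_model)
```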
https://api.github.com/repos/huggingface/transformers/issues/706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/706/comments | https://api.github.com/repos/huggingface/transformers/issues/706/events | https://github.com/huggingface/transformers/pull/706 | 458,819,769 | MDExOlB1bGxSZXF1ZXN0MjkwMzM1NjU0 | 706 | Update run_squad.py | {
"login": "saikalyan9981",
"id": 30959693,
"node_id": "MDQ6VXNlcjMwOTU5Njkz",
"avatar_url": "https://avatars.githubusercontent.com/u/30959693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saikalyan9981",
"html_url": "https://github.com/saikalyan9981",
"followers_url": "https://api.github.com/users/saikalyan9981/followers",
"following_url": "https://api.github.com/users/saikalyan9981/following{/other_user}",
"gists_url": "https://api.github.com/users/saikalyan9981/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saikalyan9981/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saikalyan9981/subscriptions",
"organizations_url": "https://api.github.com/users/saikalyan9981/orgs",
"repos_url": "https://api.github.com/users/saikalyan9981/repos",
"events_url": "https://api.github.com/users/saikalyan9981/events{/privacy}",
"received_events_url": "https://api.github.com/users/saikalyan9981/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=h1) Report\n> Merging [#706](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c304593d8fa93f25febe1458c63497a846749c89?src=pr&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #706 +/- ##\n==========================================\n- Coverage 62.27% 62.22% -0.06% \n==========================================\n Files 18 18 \n Lines 3979 3979 \n==========================================\n- Hits 2478 2476 -2 \n- Misses 1501 1503 +2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `82.44% <0%> (-1.07%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=footer). Last update [c304593...8910034](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=h1) Report\n> Merging [#706](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c304593d8fa93f25febe1458c63497a846749c89?src=pr&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #706 +/- ##\n==========================================\n- Coverage 62.27% 62.22% -0.06% \n==========================================\n Files 18 18 \n Lines 3979 3979 \n==========================================\n- Hits 2478 2476 -2 \n- Misses 1501 1503 +2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `82.44% <0%> (-1.07%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=footer). Last update [c304593...8910034](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/706?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,561 | 1,561 | 1,561 | NONE | null | Redundant else branch: `model = BertForQuestionAnswering.from_pretrained(args.bert_model)` is already written on a different line | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/706/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/706",
"html_url": "https://github.com/huggingface/transformers/pull/706",
"diff_url": "https://github.com/huggingface/transformers/pull/706.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/706.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/705/comments | https://api.github.com/repos/huggingface/transformers/issues/705/events | https://github.com/huggingface/transformers/issues/705 | 458,338,302 | MDU6SXNzdWU0NTgzMzgzMDI= | 705 | Implementing XLNet in pytorch | {
"login": "roholazandie",
"id": 7584674,
"node_id": "MDQ6VXNlcjc1ODQ2NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7584674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roholazandie",
"html_url": "https://github.com/roholazandie",
"followers_url": "https://api.github.com/users/roholazandie/followers",
"following_url": "https://api.github.com/users/roholazandie/following{/other_user}",
"gists_url": "https://api.github.com/users/roholazandie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roholazandie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roholazandie/subscriptions",
"organizations_url": "https://api.github.com/users/roholazandie/orgs",
"repos_url": "https://api.github.com/users/roholazandie/repos",
"events_url": "https://api.github.com/users/roholazandie/events{/privacy}",
"received_events_url": "https://api.github.com/users/roholazandie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"FYI @roholazandie we are currently working on XLNet with pytorch over here\r\nhttps://github.com/pingpong-ai/XLNet-pytorch/tree/dev/poc",
"I'll add it here also. I was working on a coming release this week anyway.\r\nIt's a mix of BERT/Transformer-XL and something I was also playing with (Two-Stream Self-Attention) so hopefully, it won't delay too much the release. ",
"Awesome, downstream libraries like [flair](https://github.com/zalandoresearch/flair) are really looking forward to use XLNet 🤗 (So expect results on CoNLL and PoS tagging whenever XLNet is implemented here)",
"and [jiant](https://jiant.info)!",
"I had finished Simple XLNet implementation with Pytorch Wrapper here : https://github.com/graykode/xlnet-Pytorch",
"How is this project going?\r\n\r\nI'm debating whether to try to build XLNet into jiant directly, but I'm not eager to replicate your hard work. Tx!",
"Yes we're on track to finish the release this week I think (or next Monday in the worse case).\r\n\r\nWe reproduced the results of XLNet on STS-B (Pearson R > 0.918), the GLUE task showcased on the TF repo, with the same hyper-parameters (didn't try the others tasks but the model is the same for all).\r\n\r\nIt's taking a bit more time than planned because we took the opportunity to refactor the library's back-end. I'm really excited about the new release.\r\n\r\nWe will now have 6 different architectures (BERT, GPT, GPT-2, Transformer-XL, XLNet and XLM) and over 25 associated pretrained weights, all with the same API (for instance the GLUE training script is now the same for all the models!) and direct access to all the models' internals.\r\n\r\nAnd a lot of other things (a lot of tests to avoid future regressions, compatibility with TorchScript, easy serialization and loading of fine-tuned models...)",
"> I'll add it here also. I was working on a coming release this week anyway.\r\n> It's a mix of BERT/Transformer-XL and something I was also playing with (Two-Stream Self-Attention) so hopefully, it won't delay too much the release.\r\n\r\n@thomwolf , very much looking forward to your implementation of BERT/Transformer-XL. Wondering if you were planning to release that too, and if so where you are planning to make it available. Thanks so much ",
"Is there any direct wrapper for QA task using XLNet?",
"I appriciate your hard work. I just saw it today, and you almost implemented.\r\n\r\nIs it really better than Bert, which i find to be amazing so far?",
"Under the conditions in their paper, yes. Question is whether it always is: in what tasks, using the same data set; with same compute time (or energy consumption) for training (+on what hardware as same speed on hardware X doesn't imply same speed on hardware Y); with same number of parameters (model size); with architecture size for same prediction speed; consistently / on average (+confidence interval) for repeated runs with random initialisation..."
] | 1,561 | 1,567 | 1,567 | NONE | null | Given the new work on [XLNet](https://arxiv.org/abs/1906.08237) and its TensorFlow implementation, maybe we should add this model to the current repository.
https://github.com/zihangdai/xlnet | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/705/reactions",
"total_count": 54,
"+1": 41,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 13,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/705/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/704/comments | https://api.github.com/repos/huggingface/transformers/issues/704/events | https://github.com/huggingface/transformers/pull/704 | 458,116,594 | MDExOlB1bGxSZXF1ZXN0Mjg5Nzg5MTI0 | 704 | Adjust s3 german Bert file storage | {
"login": "Timoeller",
"id": 3264870,
"node_id": "MDQ6VXNlcjMyNjQ4NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3264870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Timoeller",
"html_url": "https://github.com/Timoeller",
"followers_url": "https://api.github.com/users/Timoeller/followers",
"following_url": "https://api.github.com/users/Timoeller/following{/other_user}",
"gists_url": "https://api.github.com/users/Timoeller/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Timoeller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Timoeller/subscriptions",
"organizations_url": "https://api.github.com/users/Timoeller/orgs",
"repos_url": "https://api.github.com/users/Timoeller/repos",
"events_url": "https://api.github.com/users/Timoeller/events{/privacy}",
"received_events_url": "https://api.github.com/users/Timoeller/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,560 | 1,561 | 1,561 | CONTRIBUTOR | null | As suggested, keeping model and config files on our s3. Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/704/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/704",
"html_url": "https://github.com/huggingface/transformers/pull/704",
"diff_url": "https://github.com/huggingface/transformers/pull/704.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/704.patch",
"merged_at": 1561734659000
} |
https://api.github.com/repos/huggingface/transformers/issues/703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/703/comments | https://api.github.com/repos/huggingface/transformers/issues/703/events | https://github.com/huggingface/transformers/issues/703 | 458,075,677 | MDU6SXNzdWU0NTgwNzU2Nzc= | 703 | "Received 'killed' signal" during the circleci python3 build after submitting PR | {
"login": "chrisyue1998",
"id": 39981081,
"node_id": "MDQ6VXNlcjM5OTgxMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/39981081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisyue1998",
"html_url": "https://github.com/chrisyue1998",
"followers_url": "https://api.github.com/users/chrisyue1998/followers",
"following_url": "https://api.github.com/users/chrisyue1998/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisyue1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisyue1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisyue1998/subscriptions",
"organizations_url": "https://api.github.com/users/chrisyue1998/orgs",
"repos_url": "https://api.github.com/users/chrisyue1998/repos",
"events_url": "https://api.github.com/users/chrisyue1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisyue1998/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yeah I've removed the memory-heavy tests",
"Thanks!"
] | 1,560 | 1,561 | 1,561 | NONE | null | I submitted a PR after modifying convert_gpt2_checkpoint_to_pytorch.py, and my build_py2 test passed, but I received a very vague error from build_py3 (as written in the title of this issue) that caused my build to fail. Does anyone have any ideas as to where the issue could be?
Edit: Attached image below
<img width="625" alt="Screen Shot 2019-06-19 at 3 48 25 PM" src="https://user-images.githubusercontent.com/39981081/59795692-c8f2e280-92a9-11e9-8aa1-77cb96c2db33.png">
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/703/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/702/comments | https://api.github.com/repos/huggingface/transformers/issues/702/events | https://github.com/huggingface/transformers/pull/702 | 458,068,099 | MDExOlB1bGxSZXF1ZXN0Mjg5NzUwMDEw | 702 | Add an argument --model_size to convert_gpt2_checkpoint_to_pytorch.py | {
"login": "chrisyue1998",
"id": 39981081,
"node_id": "MDQ6VXNlcjM5OTgxMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/39981081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisyue1998",
"html_url": "https://github.com/chrisyue1998",
"followers_url": "https://api.github.com/users/chrisyue1998/followers",
"following_url": "https://api.github.com/users/chrisyue1998/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisyue1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisyue1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisyue1998/subscriptions",
"organizations_url": "https://api.github.com/users/chrisyue1998/orgs",
"repos_url": "https://api.github.com/users/chrisyue1998/repos",
"events_url": "https://api.github.com/users/chrisyue1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisyue1998/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Thanks I don't think we'll add this since converted models are already provided."
] | 1,560 | 1,567 | 1,567 | NONE | null | Add an argument --model_size to convert_gpt2_checkpoint_to_pytorch.py that lets the user specify whether they want to convert a checkpoint from the 117M model or from the 345M model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/702/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/702",
"html_url": "https://github.com/huggingface/transformers/pull/702",
"diff_url": "https://github.com/huggingface/transformers/pull/702.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/702.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/701/comments | https://api.github.com/repos/huggingface/transformers/issues/701/events | https://github.com/huggingface/transformers/issues/701 | 458,065,823 | MDU6SXNzdWU0NTgwNjU4MjM= | 701 | Low SQuADv2 F1 & EM Score | {
"login": "pachiko",
"id": 42316301,
"node_id": "MDQ6VXNlcjQyMzE2MzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/42316301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pachiko",
"html_url": "https://github.com/pachiko",
"followers_url": "https://api.github.com/users/pachiko/followers",
"following_url": "https://api.github.com/users/pachiko/following{/other_user}",
"gists_url": "https://api.github.com/users/pachiko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pachiko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pachiko/subscriptions",
"organizations_url": "https://api.github.com/users/pachiko/orgs",
"repos_url": "https://api.github.com/users/pachiko/repos",
"events_url": "https://api.github.com/users/pachiko/events{/privacy}",
"received_events_url": "https://api.github.com/users/pachiko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@thomwolf \r\nHi, it seems there is something wrong with the training code in this repo.\r\nI used Google's official BERT training code and I could get decent results even with a small batch size: \r\n`{'EM': 73.80717341230668, 'F1': 77.11048422305339, 'AvNA': 80.78315235274762}`\r\n\r\nHere's the settings I used for Google's and this repo's:\r\n```\r\n--vocab_file=uncased_L-12_H-768_A-12/vocab.txt\r\n--bert_config_file=uncased_L-12_H-768_A-12/bert_config.json\r\n--init_checkpoint=uncased_L-12_H-768_A-12/bert_model.ckpt\r\n--do_train=True\r\n--train_file=../squad/data/train-v2.0.json\r\n--do_predict=True\r\n--predict_file=../squad/data/dev-v2.0.json\r\n--train_batch_size=5\r\n--learning_rate=3e-5\r\n--num_train_epochs=2.0\r\n--max_seq_length=384\r\n--doc_stride=128\r\n--output_dir=gugel_bert\r\n--version_2_with_negative=True\r\n--do_lower_case=True\r\n```\r\n```\r\n--bert_model=bert-base-uncased\r\n--output_dir=try_bert_2\r\n--train_file=data/train-v2.0.json\r\n--predict_file=data/dev-v2.0.json\r\n--do_train\r\n--do_predict\r\n--do_lower_case\r\n--train_batch_size=5\r\n--predict_batch_size=5\r\n--num_train_epochs=2.0\r\n--learning_rate=3e-5\r\n--version_2_with_negative\r\n```\r\nEDIT:\r\nForgot to say that I also can get the expected results using SQuAD v1.1 with this repo's code.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,560 | 1,567 | 1,567 | NONE | null | Hi,
I used the default settings and run_squad.py script (with the exception of a batch size of 4 since my GPU has low memory) to train for 3 epochs. Turns out I got an EM & F1 score of 43% and 48% respectively. AvNA looks decent at 65%. Is this due to a small number of epochs or the small batch size? Note that training for 3 epochs on my single 1070-TI took 12 hours.
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/701/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/700/comments | https://api.github.com/repos/huggingface/transformers/issues/700/events | https://github.com/huggingface/transformers/pull/700 | 458,060,090 | MDExOlB1bGxSZXF1ZXN0Mjg5NzQzNDY1 | 700 | Add an argument --model_size to convert_gpt2_checkpoint_to_pytorch.py | {
"login": "chrisyue1998",
"id": 39981081,
"node_id": "MDQ6VXNlcjM5OTgxMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/39981081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisyue1998",
"html_url": "https://github.com/chrisyue1998",
"followers_url": "https://api.github.com/users/chrisyue1998/followers",
"following_url": "https://api.github.com/users/chrisyue1998/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisyue1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisyue1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisyue1998/subscriptions",
"organizations_url": "https://api.github.com/users/chrisyue1998/orgs",
"repos_url": "https://api.github.com/users/chrisyue1998/repos",
"events_url": "https://api.github.com/users/chrisyue1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisyue1998/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,560 | 1,560 | 1,560 | NONE | null | Add an argument --model_size to convert_gpt2_checkpoint_to_pytorch.py that lets the user specify whether they want to convert a checkpoint from the 117M model or from the 345M model so that they don't have to create their own 345M json config file. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/700/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/700",
"html_url": "https://github.com/huggingface/transformers/pull/700",
"diff_url": "https://github.com/huggingface/transformers/pull/700.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/700.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/699/comments | https://api.github.com/repos/huggingface/transformers/issues/699/events | https://github.com/huggingface/transformers/issues/699 | 457,847,849 | MDU6SXNzdWU0NTc4NDc4NDk= | 699 | Fine tuning GPT-2 for LM objective function | {
"login": "bakszero",
"id": 14965156,
"node_id": "MDQ6VXNlcjE0OTY1MTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/14965156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bakszero",
"html_url": "https://github.com/bakszero",
"followers_url": "https://api.github.com/users/bakszero/followers",
"following_url": "https://api.github.com/users/bakszero/following{/other_user}",
"gists_url": "https://api.github.com/users/bakszero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bakszero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bakszero/subscriptions",
"organizations_url": "https://api.github.com/users/bakszero/orgs",
"repos_url": "https://api.github.com/users/bakszero/repos",
"events_url": "https://api.github.com/users/bakszero/events{/privacy}",
"received_events_url": "https://api.github.com/users/bakszero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Seems like this is now possible with last week's [merged PR](https://github.com/huggingface/pytorch-pretrained-BERT/pull/597), but I'm curious to see what the core devs say about this as well (btw, keep up the great work!)",
"I have the same question :)\r\nI have tried the codes for BERT finetuning which is in lm-finetuning folder but looking for the same script for gpt-2.\r\n\r\nThanks",
"Yes fine-tuning GPT-2 is fixed with #597 indeed.\r\nI'll see if I can add an example but basically changing `gpt` to `gpt-2` in the gpt example should be pretty much fine.",
"@thomwolf Thanks for the great work.\r\njust wondering in order to do unsupervised LM fine-tuning (not classification) on a new dataset, should we just modify run_openai_gpt.py or is there an existing script for that?",
"No existing script for that but you can start from run_openai indeed and use just the `OpenAIGPTLMHeadModel`.\r\nIf you want to supply another example, happy to welcome a PR",
"I am still confused as to how to use the run_openai_gpt.py to finetune gpt2 model. A short example would be helpful",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,560 | 1,571 | 1,571 | NONE | null | I'm looking to fine-tune GPT-2's parameters on a custom piece of text, so that the weights are adapted to that text, starting from the initial model.
The script here does it for the original TensorFlow implementation: https://github.com/nshepperd/gpt-2. Could you please give me suggestions on how to do this fine-tuning with the PyTorch version and subsequently use it for text generation? It'd be of great help, thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/699/reactions",
"total_count": 9,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/699/timeline | completed | null | null |
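Following the suggestion above to start from `run_openai_gpt.py` and keep only the LM head, a minimal fine-tuning sketch (assuming the post-#597 `pytorch_pretrained_bert` API; `texts` and the hyper-parameters are placeholders):

```python
import torch
from pytorch_pretrained_bert import GPT2LMHeadModel, GPT2Tokenizer, OpenAIAdam

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = OpenAIAdam(model.parameters(), lr=6.25e-5)

texts = ["Some custom text to adapt the model to."]  # placeholder corpus
model.train()
for epoch in range(3):
    for text in texts:
        input_ids = torch.tensor([tokenizer.encode(text)])
        # With lm_labels given, the LM head returns the next-token loss
        # (labels are shifted against the logits inside this version).
        loss = model(input_ids, lm_labels=input_ids)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```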
https://api.github.com/repos/huggingface/transformers/issues/698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/698/comments | https://api.github.com/repos/huggingface/transformers/issues/698/events | https://github.com/huggingface/transformers/issues/698 | 457,606,005 | MDU6SXNzdWU0NTc2MDYwMDU= | 698 | convert_gpt2_checkpoint_to_pytorch dimensions assertion error | {
"login": "chrisyue1998",
"id": 39981081,
"node_id": "MDQ6VXNlcjM5OTgxMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/39981081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisyue1998",
"html_url": "https://github.com/chrisyue1998",
"followers_url": "https://api.github.com/users/chrisyue1998/followers",
"following_url": "https://api.github.com/users/chrisyue1998/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisyue1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisyue1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisyue1998/subscriptions",
"organizations_url": "https://api.github.com/users/chrisyue1998/orgs",
"repos_url": "https://api.github.com/users/chrisyue1998/repos",
"events_url": "https://api.github.com/users/chrisyue1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisyue1998/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, I tried doing the same and was successful(in running the script, at least).\r\nWhat do you specify as your --gpt2_checkpoint_path as? (hope you have copied your finetuned checkpoints to preloaded 117M)?\r\n\r\nUpdate: Currently the output just stores a `config.json` and `pytorch_model.bin`. Suprisingly I don't see the vocab.txt and the merges.txt that they specify in the README. My tensorflow model checkpoint has a vocab.bpe but not merges.txt. So I'm stuck on how to proceed with the following error when running `run_gpt2.py`:\r\n` We assumed '/home/code-base/pytorch-pretrained-BERT/pytorch_pretrained_bert/temp-model/' was a path or url but couldn't find files /home/code-base/pytorch-pretrained-BERT/pytorch_pretrained_bert/temp-model/vocab.json and /home/code-base/pytorch-pretrained-BERT/pytorch_pretrained_bert/temp-model/merges.txt at this path or url.`\r\nSince I'm unable to generate samples using these weights now, any ideas would be great!",
"I was able to fix my issue. I realized that the GPTConfig constructor used in convert_gpt2_checkpoint_to_pytorch.py is only for the 117M model, while I was trying to convert a 345M model. I ended up just making a new json file for the larger model. I never ran into your issues, however, as I didn't use run_gpt2.py.",
"> I was able to fix my issue. I realized that the GPTConfig constructor used in convert_gpt2_checkpoint_to_pytorch.py is only for the 117M model, while I was trying to convert a 345M model. I ended up just making a new json file for the larger model. I never ran into your issues, however, as I didn't use run_gpt2.py.\r\n\r\nSo the new checkpoint files for you contain only a `config.json` and `pytorch_model.bin`?",
"> > I was able to fix my issue. I realized that the GPTConfig constructor used in convert_gpt2_checkpoint_to_pytorch.py is only for the 117M model, while I was trying to convert a 345M model. I ended up just making a new json file for the larger model. I never ran into your issues, however, as I didn't use run_gpt2.py.\r\n> \r\n> So the new checkpoint files for you contain only a `config.json` and `pytorch_model.bin`?\r\n\r\nCorrect",
"How did you end up fixing this problem? What json file did you have to create? I'd also like to convert a 345M model and am running into the problem described in this issue.",
"@dsonbill Sorry, I honestly don't remember. My work was on the laptop I used at my old job, so I don't have access to the files anymore. You could check my really old fork though and maybe find something useful there.",
"Hacky solution by modifying this script: https://raw.githubusercontent.com/huggingface/transformers/master/src/transformers/convert_gpt2_original_tf_checkpoint_to_pytorch.py\r\n\r\nBefore:\r\n```py\r\n # Construct model\r\n if gpt2_config_file == \"\":\r\n config = GPT2Config()\r\n else:\r\n config = GPT2Config.from_json_file(gpt2_config_file)\r\n model = GPT2Model(config)\r\n```\r\n\r\nAfter:\r\n\r\n```py\r\n # Construct model\r\n config = GPT2Config.from_pretrained('gpt2-medium') # Replace 'gpt2-medium' with whichever model spec you're converting\r\n model = GPT2Model(config)\r\n```",
"> Hacky solution by modifying this script: https://raw.githubusercontent.com/huggingface/transformers/master/src/transformers/convert_gpt2_original_tf_checkpoint_to_pytorch.py\r\n> \r\n> Before:\r\n> \r\n> ```python\r\n> # Construct model\r\n> if gpt2_config_file == \"\":\r\n> config = GPT2Config()\r\n> else:\r\n> config = GPT2Config.from_json_file(gpt2_config_file)\r\n> model = GPT2Model(config)\r\n> ```\r\n> \r\n> After:\r\n> \r\n> ```python\r\n> # Construct model\r\n> config = GPT2Config.from_pretrained('gpt2-medium') # Replace 'gpt2-medium' with whichever model spec you're converting\r\n> model = GPT2Model(config)\r\n> ```\r\n\r\nHi, this doesn't work on me. I'm facing the same problem. Does anyone have any ideas?",
"> > Hacky solution by modifying this script: https://raw.githubusercontent.com/huggingface/transformers/master/src/transformers/convert_gpt2_original_tf_checkpoint_to_pytorch.py\r\n> > Before:\r\n> > ```python\r\n> > # Construct model\r\n> > if gpt2_config_file == \"\":\r\n> > config = GPT2Config()\r\n> > else:\r\n> > config = GPT2Config.from_json_file(gpt2_config_file)\r\n> > model = GPT2Model(config)\r\n> > ```\r\n> > \r\n> > \r\n> > After:\r\n> > ```python\r\n> > # Construct model\r\n> > config = GPT2Config.from_pretrained('gpt2-medium') # Replace 'gpt2-medium' with whichever model spec you're converting\r\n> > model = GPT2Model(config)\r\n> > ```\r\n> \r\n> Hi, this doesn't work on me. I'm facing the same problem. Does anyone have any ideas?\r\n\r\nI could get past the initial error by setting the config to be hparams.json\r\n\r\n```export OPENAI_GPT2_CHECKPOINT_PATH=gpt2/355M\r\n\r\ntransformers-cli convert --model_type gpt2 \\\r\n --tf_checkpoint $OPENAI_GPT2_CHECKPOINT_PATH/model.ckpt \\\r\n --pytorch_dump_output $OPENAI_GPT2_CHECKPOINT_PATH \\\r\n --config $OPENAI_GPT2_CHECKPOINT_PATH/hparams.json\r\n```\r\n\r\nwhich seems to create the model checkpoint. I haven't check to see if the converted model works yet though. "
] | 1,560 | 1,598 | 1,560 | NONE | null | I finetuned a GPT-2 model using TensorFlow (using https://github.com/nshepperd/gpt-2), and I tried to run the TF to PyTorch conversion script, but I got this error:
```
Traceback (most recent call last):
  File "convert_gpt2_checkpoint_to_pytorch.py", line 72, in <module>
    args.pytorch_dump_folder_path)
  File "convert_gpt2_checkpoint_to_pytorch.py", line 39, in convert_gpt2_checkpoint_to_pytorch
    load_tf_weights_in_gpt2(model, gpt2_checkpoint_path)
  File "/Users/UAC897/Documents/kepler/venv/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling_gpt2.py", line 90, in load_tf_weights_in_gpt2
    assert pointer.shape == array.shape
AssertionError: (torch.Size([2304]), (3072,))
```
Does anyone have any ideas? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/698/timeline | completed | null | null |
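The shapes in the assertion above tell the story: the default `GPT2Config` describes the 117M model, whose fused QKV `c_attn` bias has 3 × 768 = 2304 entries, while a 345M checkpoint carries 3 × 1024 = 3072. A sketch of converting the larger model with matching hyper-parameters (the checkpoint path is a placeholder; the field values are the published 345M settings):

```python
from pytorch_pretrained_bert import GPT2Config, GPT2Model
from pytorch_pretrained_bert.modeling_gpt2 import load_tf_weights_in_gpt2

# 345M ("gpt2-medium") hyper-parameters; the 117M defaults are n_embd=768,
# n_layer=12, n_head=12, hence the 2304-vs-3072 shape mismatch.
config = GPT2Config(
    vocab_size_or_config_json_file=50257,
    n_positions=1024,
    n_ctx=1024,
    n_embd=1024,
    n_layer=24,
    n_head=16,
)
model = GPT2Model(config)
load_tf_weights_in_gpt2(model, "345M/model.ckpt")  # placeholder path
```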
https://api.github.com/repos/huggingface/transformers/issues/697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/697/comments | https://api.github.com/repos/huggingface/transformers/issues/697/events | https://github.com/huggingface/transformers/pull/697 | 457,517,431 | MDExOlB1bGxSZXF1ZXN0Mjg5MzEwMzYx | 697 | Updating examples | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=h1) Report\n> Merging [#697](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/3763f8944dc3fef8afb0c525a2ced8a04889c14f?src=pr&el=desc) will **decrease** coverage by `6%`.\n> The diff coverage is `62.5%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #697 +/- ##\n==========================================\n- Coverage 68.23% 62.22% -6.01% \n==========================================\n Files 18 18 \n Lines 3976 3979 +3 \n==========================================\n- Hits 2713 2476 -237 \n- Misses 1263 1503 +240\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/tokenization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uLnB5) | `89.04% <ø> (-3.66%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/modeling\\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfb3BlbmFpLnB5) | `68.11% <50%> (-12.11%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `79.49% <66.66%> (-9.06%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `69.79% <66.66%> (-12.05%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `52.93% <0%> (-6.29%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/tokenization\\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX29wZW5haS5weQ==) | `75.64% <0%> (-5.7%)` | :arrow_down: |\n| ... and [2 more](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=footer). Last update [3763f89...411981a](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/697?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,560 | 1,566 | 1,561 | MEMBER | null | This PR checks that the examples are working well (and fixes a learning rate bug in distributed settings).
Also:
- prepare 2 fine-tuned models on SQuAD (BERT Whole Word Masking) so people can also use fine-tuned models (nice performance: "exact_match": 86.9, "f1": 93.2, better than the original Google AI values)
- add a bertology script which showcases:
* computing head attention entropy
* computing head importance scores according to http://arxiv.org/abs/1905.10650
* performing head masking and head pruning (like masking but you actually remove the weights) according to http://arxiv.org/abs/1905.10650 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/697/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/697",
"html_url": "https://github.com/huggingface/transformers/pull/697",
"diff_url": "https://github.com/huggingface/transformers/pull/697.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/697.patch",
"merged_at": 1561017505000
} |
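For a flavor of the first bertology computation listed above, a self-contained sketch of per-head attention entropy (`attn` stands in for one layer's attention probabilities; this is not the script's actual code):

```python
import torch

def head_entropy(attn, eps=1e-9):
    # attn: (batch, heads, seq, seq) attention probabilities, rows sum to 1.
    # Entropy of each attention distribution, averaged over batch and query
    # positions -> one score per head; low entropy = a sharply focused head.
    ent = -(attn * torch.log(attn + eps)).sum(dim=-1)  # (batch, heads, seq)
    return ent.mean(dim=0).mean(dim=-1)                # (heads,)

attn = torch.softmax(torch.randn(2, 12, 16, 16), dim=-1)
print(head_entropy(attn))
```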
https://api.github.com/repos/huggingface/transformers/issues/696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/696/comments | https://api.github.com/repos/huggingface/transformers/issues/696/events | https://github.com/huggingface/transformers/pull/696 | 457,357,054 | MDExOlB1bGxSZXF1ZXN0Mjg5MTc5ODc5 | 696 | Split config weights | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=h1) Report\n> Merging [#696](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/a6f2511811f08c24184f8162f226f252cb6ceaa4?src=pr&el=desc) will **decrease** coverage by `0.13%`.\n> The diff coverage is `68.18%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #696 +/- ##\n==========================================\n- Coverage 68.37% 68.23% -0.14% \n==========================================\n Files 18 18 \n Lines 3990 3976 -14 \n==========================================\n- Hits 2728 2713 -15 \n- Misses 1262 1263 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/modeling\\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfb3BlbmFpLnB5) | `80.21% <100%> (-0.09%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `81.83% <100%> (-0.08%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `59.21% <100%> (-0.12%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `88.55% <56.25%> (-0.61%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/tokenization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uLnB5) | `92.69% <0%> (+0.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=footer). Last update [a6f2511...f964753](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/696?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,560 | 1,566 | 1,560 | MEMBER | null | Split the config and weights files for Bert as well (this was previously only done for GPT/GPT-2/Transformer-XL).
This will:
- make the Bert model instantiation faster (no need to untar an archive)
- simplify the distributed training (no need to have one archive for each process). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/696/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/696",
"html_url": "https://github.com/huggingface/transformers/pull/696",
"diff_url": "https://github.com/huggingface/transformers/pull/696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/696.patch",
"merged_at": 1560850978000
} |
https://api.github.com/repos/huggingface/transformers/issues/695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/695/comments | https://api.github.com/repos/huggingface/transformers/issues/695/events | https://github.com/huggingface/transformers/issues/695 | 457,187,421 | MDU6SXNzdWU0NTcxODc0MjE= | 695 | BERT output not deterministic | {
"login": "yspaik",
"id": 15726007,
"node_id": "MDQ6VXNlcjE1NzI2MDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/15726007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yspaik",
"html_url": "https://github.com/yspaik",
"followers_url": "https://api.github.com/users/yspaik/followers",
"following_url": "https://api.github.com/users/yspaik/following{/other_user}",
"gists_url": "https://api.github.com/users/yspaik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yspaik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yspaik/subscriptions",
"organizations_url": "https://api.github.com/users/yspaik/orgs",
"repos_url": "https://api.github.com/users/yspaik/repos",
"events_url": "https://api.github.com/users/yspaik/events{/privacy}",
"received_events_url": "https://api.github.com/users/yspaik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"As with all the other issues about Bert being not deterministic (#403, #679, #432, #475, #265, #278), it's likely because you didn't set the model in eval mode to desactivate the DropOut modules: `model.eval()`.\r\n\r\nI will try to emphasize this more in the examples of the readme because this issue keeps being raised.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> Epoch 1/6\r\n> loss: 2.0674 - bert_loss: 1.0283 - bert_1_loss: 1.0390 - bert_accuracy: 0.6604 - bert_1_accuracy: 0.6650\r\n> \r\n> Epoch 2/6\r\n> loss: 1.7190 - bert_loss: 0.8604 - bert_1_loss: 0.8586 - bert_accuracy: 0.7000 - bert_1_accuracy: 0.7081\r\n> \r\n> Epoch 3/6\r\n> loss: 1.5244 - bert_loss: 0.7715 - bert_1_loss: 0.7528 - bert_accuracy: 0.7250 - bert_1_accuracy: 0.7424\r\n> \r\n> Epoch 4/6\r\n> loss: 1.3203 - bert_loss: 0.6765 - bert_1_loss: 0.6438 - bert_accuracy: 0.7585 - bert_1_accuracy: 0.7741\r\n> \r\n> Epoch 5/6\r\n> loss: 1.1102 - bert_loss: 0.5698 - bert_1_loss: 0.5404 - bert_accuracy: 0.7936 - bert_1_accuracy: \r\n> 0.8082 - val_loss: 0.7052 - val_bert_loss: 0.3709 - val_bert_1_loss: 0.3343 - val_bert_accuracy: 0.8687 - val_bert_1_accuracy: 0.8803\r\n> Epoch 6/6\r\n> ETA: 0s - loss: 0.9269 - bert_loss: 0.4823 - bert_1_loss: 0.4446 - bert_accuracy: 0.8287 - bert_1_accuracy: 0.8452\r\n> bert_loss: 0.4823 - bert_1_loss: 0.4446 - bert_accuracy: 0.8287 - bert_1_accuracy: 0.8452`\r\n\r\nI have the same problem in tensorflow and I configured the model in order to consider Dropout only during the training phase (training=True). But I still have random outputs after each prediction.\r\n\r\nAs you can see during the training phase performance gets better so I guess that the problem is on the prediction"
] | 1,560 | 1,589 | 1,566 | NONE | null | BERT output is not deterministic.
I expect the output values to be deterministic when I give the same input, but the values from my BERT model keep changing. Strangely, the outputs alternate between two values: a different value comes out on one run, the earlier value comes back on the next run, and this repeats.
How can I make the output deterministic?
Let me show snippets of my code.
I use the model as below:
```
tokenizer = BertTokenizer.from_pretrained(self.bert_type, do_lower_case=self.do_lower_case, cache_dir=self.bert_cache_path)
pretrain_bert = BertModel.from_pretrained(self.bert_type, cache_dir=self.bert_cache_path)
bert_config = pretrain_bert.config
```
I get the output like this:
```
all_encoder_layer, pooled_output = self.model_bert(all_input_ids, all_segment_ids, all_input_mask)
# all_encoder_layer: BERT outputs from all layers.
# pooled_output: output of [CLS] vec.
```
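To make the behaviour easy to reproduce, here is a minimal, self-contained sketch of the repeated call (the model name and input ids below are placeholders; in my real code they come from the setup above):
```python
import torch
from pytorch_pretrained_bert import BertModel

model = BertModel.from_pretrained('bert-base-uncased')  # loaded as-is, no mode change

input_ids = torch.tensor([[101, 7592, 2088, 102]])  # placeholder ids for "[CLS] hello world [SEP]"
segment_ids = torch.zeros_like(input_ids)
input_mask = torch.ones_like(input_ids)

for _ in range(4):
    _, pooled = model(input_ids, segment_ids, input_mask, output_all_encoded_layers=False)
    print(pooled[0, :5])  # the printed vectors alternate between two values
```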
pooled_output
```
tensor([[-3.3997e-01, 2.6870e-01, -2.8109e-01, -2.0018e-01, -8.6849e-02,
tensor([[ 7.4340e-02, -3.4894e-03, -4.9583e-03, 6.0806e-02, 8.5685e-02,
tensor([[-3.3997e-01, 2.6870e-01, -2.8109e-01, -2.0018e-01, -8.6849e-02,
tensor([[ 7.4340e-02, -3.4894e-03, -4.9583e-03, 6.0806e-02, 8.5685e-02,
```
For the all-encoder-layer outputs, the situation is the same: the values repeat every other run.
I also extract word embedding features from BERT, and the situation is the same.
```
wemb_n
tensor([[[ 0.1623, 0.4293, 0.1031, ..., -0.0434, -0.5156, -1.0220],
tensor([[[ 0.0389, 0.5050, 0.1327, ..., 0.3232, 0.2232, -0.5383],
tensor([[[ 0.1623, 0.4293, 0.1031, ..., -0.0434, -0.5156, -1.0220],
tensor([[[ 0.0389, 0.5050, 0.1327, ..., 0.3232, 0.2232, -0.5383],
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/695/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/694/comments | https://api.github.com/repos/huggingface/transformers/issues/694/events | https://github.com/huggingface/transformers/pull/694 | 456,855,648 | MDExOlB1bGxSZXF1ZXN0Mjg4NzgyNjQ0 | 694 | Release 0.6.3 | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=h1) Report\n> Merging [#694](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/80684f6f86c13a89fc1e4feac248ef96b013765c?src=pr&el=desc) will **increase** coverage by `1.17%`.\n> The diff coverage is `95.76%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #694 +/- ##\n==========================================\n+ Coverage 67.19% 68.37% +1.17% \n==========================================\n Files 18 18 \n Lines 3847 3990 +143 \n==========================================\n+ Hits 2585 2728 +143 \n Misses 1262 1262\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `59.33% <ø> (ø)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/tokenization\\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX29wZW5haS5weQ==) | `81.34% <ø> (ø)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX3RyYW5zZm9feGwucHk=) | `32.59% <ø> (ø)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/tokenization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uLnB5) | `91.78% <ø> (-0.92%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <ø> (+1.06%)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/modeling\\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfb3BlbmFpLnB5) | `80.3% <93.65%> (+2%)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `81.91% <94.93%> (+2.52%)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `89.16% <97.87%> (+0.59%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=footer). 
Last update [80684f6...4447f27](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/694?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,560 | 1,566 | 1,560 | MEMBER | null | Preparing release 0.6.3
- adding Bert whole word masking models
- BERTology:
- add head masking, head pruning and optional output of multi-head attention output gradients
- output all layers hidden states in GPT/GPT-2
- PyTorch Hub: adding and checking all the models
- various clean-ups and doc/test improvements | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/694/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/694",
"html_url": "https://github.com/huggingface/transformers/pull/694",
"diff_url": "https://github.com/huggingface/transformers/pull/694.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/694.patch",
"merged_at": 1560781646000
} |
https://api.github.com/repos/huggingface/transformers/issues/693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/693/comments | https://api.github.com/repos/huggingface/transformers/issues/693/events | https://github.com/huggingface/transformers/issues/693 | 456,725,533 | MDU6SXNzdWU0NTY3MjU1MzM= | 693 | Have no GPU to train language modelling | {
"login": "khaerulumam42",
"id": 35139151,
"node_id": "MDQ6VXNlcjM1MTM5MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/35139151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khaerulumam42",
"html_url": "https://github.com/khaerulumam42",
"followers_url": "https://api.github.com/users/khaerulumam42/followers",
"following_url": "https://api.github.com/users/khaerulumam42/following{/other_user}",
"gists_url": "https://api.github.com/users/khaerulumam42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khaerulumam42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khaerulumam42/subscriptions",
"organizations_url": "https://api.github.com/users/khaerulumam42/orgs",
"repos_url": "https://api.github.com/users/khaerulumam42/repos",
"events_url": "https://api.github.com/users/khaerulumam42/events{/privacy}",
"received_events_url": "https://api.github.com/users/khaerulumam42/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"[You can train a tensorflow model using google colab for free](https://github.com/google-research/bert#using-bert-in-colab). After training it, you can [convert your tf model to pytorch](https://github.com/huggingface/pytorch-pretrained-BERT#command-line-interface). ",
"Or use 300 usd credit for google cloud, that you get when you signup i believe.",
"Thank you @oliverguhr and @Oxi84 for suggestions. \n\nI have tried both methods, using google colabs and GPU as runtime processor, it took about 240hours for every epoch (maybe if I use apex, it will be faster but I think still hundreds of hours), i think it's impossible to run google colabs dozens of days.\n\nI got free trial for GCP, unfortunately Google not provide GPU for free trial version. I try training use GCP with 2CPU and 13GB RAM, it take 200thousands of hours training, is sooo long time.\n\nMaybe I should reduce corpus size?\n\nThanks",
"Or smaller vocabulary. I am pretty sure you can even use TPU on Google cloud, let someone else confirm that.",
"@Oxi84 \r\nFor my classification task, I noticed that training the model with just 40 mb of data will give me already pretty good results. Training with the full 1,5 GB of my dataset improves the results by just 2-3% accuracy. \r\nSo you might start with a (random) subset of your data and improve the size step by step and see if your scores get better. ",
"Oh nice insight @oliverguhr , thank you. I will try to reduce training data and train.\r\n"
] | 1,560 | 1,560 | 1,560 | NONE | null | Sorry for opening this issue here; it is not an issue with this repository itself.
I really appreciate that the authors created this repository; it helps us better understand how BERT works and how to apply it to several tasks.
My problem is with training: I have no GPU to train a language model. I have an Indonesian dataset (about 2GB) that is trainable for language modelling using this repo. Could anyone help me train on this dataset? If you could help, you have permission to open-source or use the trained model.
I hope this will lead to more models being provided and make the NLP community more interested in the latest NLP models, especially for Indonesian.
You can email me directly on [email protected] or comment below.
Thank you very much | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/693/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/692/comments | https://api.github.com/repos/huggingface/transformers/issues/692/events | https://github.com/huggingface/transformers/issues/692 | 456,611,585 | MDU6SXNzdWU0NTY2MTE1ODU= | 692 | Include a reference on in-domain LM pre-training for BERT | {
"login": "lopuhin",
"id": 424613,
"node_id": "MDQ6VXNlcjQyNDYxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/424613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lopuhin",
"html_url": "https://github.com/lopuhin",
"followers_url": "https://api.github.com/users/lopuhin/followers",
"following_url": "https://api.github.com/users/lopuhin/following{/other_user}",
"gists_url": "https://api.github.com/users/lopuhin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lopuhin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lopuhin/subscriptions",
"organizations_url": "https://api.github.com/users/lopuhin/orgs",
"repos_url": "https://api.github.com/users/lopuhin/repos",
"events_url": "https://api.github.com/users/lopuhin/events{/privacy}",
"received_events_url": "https://api.github.com/users/lopuhin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Ah, thank you very much for this! I'll read over the paper and include it as a reference soon.",
"Finally read it once I had some free time at the weekend and added PR #715. Thank you!",
"Thank you @Rocketknight1 !",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,560 | 1,567 | 1,567 | NONE | null | From https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/examples/lm_finetuning#introduction
> As such, it's hard to predict what effect this step will have on final model performance, but it's reasonable to conjecture that this approach can improve the final classification performance, especially when a large unlabelled corpus from the target domain is available, labelled data is limited, or the target domain is very unusual and different from 'normal' English text.
> If you are aware of any literature on this subject, please feel free to add it in here, or open an issue and tag me (@Rocketknight1) and I'll include it.
Hi @Rocketknight1, this paper https://arxiv.org/pdf/1905.05583.pdf studies within-task and within-domain pre-training for BERT in Section 5.4, and they achieve a good boost from it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/692/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/691/comments | https://api.github.com/repos/huggingface/transformers/issues/691/events | https://github.com/huggingface/transformers/pull/691 | 456,537,080 | MDExOlB1bGxSZXF1ZXN0Mjg4NTU2NDE3 | 691 | import class "GPT2MultipleChoiceHead" | {
"login": "vanche",
"id": 10228650,
"node_id": "MDQ6VXNlcjEwMjI4NjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/10228650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vanche",
"html_url": "https://github.com/vanche",
"followers_url": "https://api.github.com/users/vanche/followers",
"following_url": "https://api.github.com/users/vanche/following{/other_user}",
"gists_url": "https://api.github.com/users/vanche/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vanche/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vanche/subscriptions",
"organizations_url": "https://api.github.com/users/vanche/orgs",
"repos_url": "https://api.github.com/users/vanche/repos",
"events_url": "https://api.github.com/users/vanche/events{/privacy}",
"received_events_url": "https://api.github.com/users/vanche/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=h1) Report\n> Merging [#691](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/b3f9e9451b3f999118f2299229bb13f2f691c48f?src=pr&el=desc) will **increase** coverage by `0.16%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #691 +/- ##\n==========================================\n+ Coverage 67.08% 67.24% +0.16% \n==========================================\n Files 18 18 \n Lines 3846 3847 +1 \n==========================================\n+ Hits 2580 2587 +7 \n+ Misses 1266 1260 -6\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvX19pbml0X18ucHk=) | `100% <ø> (ø)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX3RyYW5zZm9feGwucHk=) | `32.59% <0%> (+0.18%)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <0%> (+0.53%)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/optimization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvb3B0aW1pemF0aW9uLnB5) | `74.26% <0%> (+0.73%)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/tokenization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uLnB5) | `92.69% <0%> (+0.91%)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/file\\_utils.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvZmlsZV91dGlscy5weQ==) | `67.78% <0%> (+1.34%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=footer). Last update [b3f9e94...8289646](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/691?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks @vanche!"
] | 1,560 | 1,560 | 1,560 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/691/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/691",
"html_url": "https://github.com/huggingface/transformers/pull/691",
"diff_url": "https://github.com/huggingface/transformers/pull/691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/691.patch",
"merged_at": 1560633176000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/690/comments | https://api.github.com/repos/huggingface/transformers/issues/690/events | https://github.com/huggingface/transformers/pull/690 | 456,489,857 | MDExOlB1bGxSZXF1ZXN0Mjg4NTI0NTE2 | 690 | Transformer XL ProjectedAdaptiveLogSoftmax output fix | {
"login": "shashwath94",
"id": 7631779,
"node_id": "MDQ6VXNlcjc2MzE3Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7631779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashwath94",
"html_url": "https://github.com/shashwath94",
"followers_url": "https://api.github.com/users/shashwath94/followers",
"following_url": "https://api.github.com/users/shashwath94/following{/other_user}",
"gists_url": "https://api.github.com/users/shashwath94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shashwath94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shashwath94/subscriptions",
"organizations_url": "https://api.github.com/users/shashwath94/orgs",
"repos_url": "https://api.github.com/users/shashwath94/repos",
"events_url": "https://api.github.com/users/shashwath94/events{/privacy}",
"received_events_url": "https://api.github.com/users/shashwath94/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Perfect, thanks @shashwath94!"
] | 1,560 | 1,560 | 1,560 | CONTRIBUTOR | null | Fixes the return value of `ProjectedAdaptiveLogSoftmax` layer for Transformer XL when it is a standard softmax without cutoffs (n_clusters=0). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/690/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/690",
"html_url": "https://github.com/huggingface/transformers/pull/690",
"diff_url": "https://github.com/huggingface/transformers/pull/690.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/690.patch",
"merged_at": 1560633250000
} |
https://api.github.com/repos/huggingface/transformers/issues/689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/689/comments | https://api.github.com/repos/huggingface/transformers/issues/689/events | https://github.com/huggingface/transformers/issues/689 | 456,441,010 | MDU6SXNzdWU0NTY0NDEwMTA= | 689 | Failing to run pregenerate_training_data.py & finetune_on_pregenerated.py | {
"login": "SamuelLarkin",
"id": 7314973,
"node_id": "MDQ6VXNlcjczMTQ5NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7314973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelLarkin",
"html_url": "https://github.com/SamuelLarkin",
"followers_url": "https://api.github.com/users/SamuelLarkin/followers",
"following_url": "https://api.github.com/users/SamuelLarkin/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelLarkin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelLarkin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelLarkin/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelLarkin/orgs",
"repos_url": "https://api.github.com/users/SamuelLarkin/repos",
"events_url": "https://api.github.com/users/SamuelLarkin/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelLarkin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Your server probably can't reach AWS to download the models.\r\nI need to make these error messages more clear, they currently gather several failure cases.\r\nWill do that in the coming release of next week.",
"What should I do if I have downloaded the package manually?",
"If you download them manually, you will have to figure out what is their file name in the cache. My problem was that on the cluster I'm using, worker nodes don't have access to the internet but the interactive nodes do. It's when I was tracing the code on an interactive node that I downloaded the models. Then, to run on a worker node, I define `export PYTORCH_PRETRAINED_BERT_CACHE=$BERT_MODEL_HOME/pytorch_pretrained_bert` which obviously is pointing to my cache.\r\n\r\nBack to downloading manually, you will have to \"properly\" name your model. Looking at my cache, I see\r\n`\r\na803ce83ca27fecf74c355673c434e51c265fb8a3e0e57ac62a80e38ba98d384.681017f415dfb33ec8d0e04fe51a619f3f01532ecea04edbfd48c5d160550d9c\r\n` which is actually `bert-base-cased.tar.gz`. The name of the model's file is base on some web id (sorry not familiar with this).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,560 | 1,568 | 1,568 | CONTRIBUTOR | null | Hi,
I would like to fine-tune BERT using my own data.
```
readonly model=bert-base-multilingual-cased
export PYTORCH_PRETRAINED_BERT_CACHE=.
#[https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/examples/lm_finetuning](BERT Model Finetuning using Masked Language Modeling objective)
# pytorch-pretrained-bert convert_tf_checkpoint_to_pytorch $bert_model_home/multi_cased_L-12_H-768_A-12/{bert_model.ckpt,bert_config.json} bert-base-multilingual-cased
#zcat --force corpora/*.{en,fr} > my_corpus.txt
#zcat --force corpora/unannotated_seq.{en,fr} > my_corpus.txt
mkdir -p training
#Pregenerating training data
python3 pytorch-pretrained-BERT/examples/lm_finetuning/pregenerate_training_data.py \
--train_corpus my_corpus.txt \
--bert_model $model \
--output_dir training/ \
--epochs_to_generate 3 \
--max_seq_len 256
mkdir -p finetuned_lm
#Training on pregenerated data
python3 pytorch-pretrained-BERT/examples/lm_finetuning/finetune_on_pregenerated.py \
--pregenerated_data training/ \
--bert_model $model \
--output_dir finetuned_lm/ \
--epochs 3
```
Then I get
```
Model name 'bert-base-multilingual-cased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-vocab.txt' was a path or url but couldn't find any file associated to this path or url.
Traceback (most recent call last):
File "pytorch-pretrained-BERT/examples/lm_finetuning/pregenerate_training_data.py", line 338, in <module>
main()
File "pytorch-pretrained-BERT/examples/lm_finetuning/pregenerate_training_data.py", line 293, in main
vocab_list = list(tokenizer.vocab.keys())
AttributeError: 'NoneType' object has no attribute 'vocab'
No training data was found!
```
```
wc my_corpus.txt
390400 my_corpus.txt
```
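For what it's worth, here is the minimal Python check I plan to try next, passing a manually downloaded vocab file instead of the shortcut name (the local path is a placeholder; the vocab file is fetched by hand from the S3 URL shown in the error above):
```python
from pytorch_pretrained_bert import BertTokenizer

# placeholder path to the manually downloaded vocab file
vocab_path = "/path/to/bert-base-multilingual-cased-vocab.txt"
tokenizer = BertTokenizer.from_pretrained(vocab_path, do_lower_case=False)
print(len(tokenizer.vocab))  # should print the vocabulary size instead of failing
```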
Why is `bert-base-multilingual-cased` not found in `(bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese)` when it is clearly there?
Why am I getting a `NoneType` for the vocab? I thought it was supposed to be auto-downloaded if missing.
What does `bert-base-multilingual-cased` represent? Should I have converted a TF model to a PyTorch model beforehand and named it `bert-base-multilingual-cased`? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/689/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/688/comments | https://api.github.com/repos/huggingface/transformers/issues/688/events | https://github.com/huggingface/transformers/pull/688 | 456,321,061 | MDExOlB1bGxSZXF1ZXN0Mjg4Mzg5Mjgz | 688 | Add German Bert model to code, update readme | {
"login": "Timoeller",
"id": 3264870,
"node_id": "MDQ6VXNlcjMyNjQ4NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3264870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Timoeller",
"html_url": "https://github.com/Timoeller",
"followers_url": "https://api.github.com/users/Timoeller/followers",
"following_url": "https://api.github.com/users/Timoeller/following{/other_user}",
"gists_url": "https://api.github.com/users/Timoeller/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Timoeller/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Timoeller/subscriptions",
"organizations_url": "https://api.github.com/users/Timoeller/orgs",
"repos_url": "https://api.github.com/users/Timoeller/repos",
"events_url": "https://api.github.com/users/Timoeller/events{/privacy}",
"received_events_url": "https://api.github.com/users/Timoeller/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This looks great @Timoeller – do you have an estimate for the compute power you used to train your model?\r\n\r\nUPDATE. Ok the answer is in the blogpost: https://deepset.ai/german-bert\r\n> We trained using Google's Tensorflow code on a single cloud TPU v2 with standard settings. \r\n> We trained 840k steps with a batch size of 1024 for sequence length 128. Training took about 9 days. \r\n",
"Sorry, I just realized we made a wrong oversimplification. We of course trained in the end for 30k steps on a longer batch size.\r\nI updated the article accordingly: We trained 810k steps with a batch size of 1024 for sequence length 128 and 30k steps with sequence length 512. Training took about 9 days. ",
"Looks great, thanks a lot @Timoeller!",
"@Timoeller (and @tholor also I guess): in the coming release 0.6.3, I'm switching to a split file format for Bert (like already done in GPT/GPT-2/Transformer-XL) in which we separately store config and weights files on the S3 to avoid having to untar an archive at each instantiation of the model.\r\n\r\nIn the short term I'll be storing your model's files on our s3 but you can also split the archive yourself and I can switch back to your s3 if you would like to.",
"Correctly guessed. @tholor and me are working together. I also created another PR for the updated file locations.",
"Hello, could you please share how did you generate the vocab list? ",
"This is the code how we did it. There was a special \",\" symbol at index 0, which got used unintentionally as padding token. So we swapped the first two strings in the vocab.txt and created a \"[unused3001]\". See also the discussion in: https://github.com/huggingface/pytorch-transformers/issues/778\r\n\r\nHope that helps.\r\n\r\n```\r\n\r\nspm.SentencePieceTrainer.Train(\r\n f'--input={INPUT_FILE} --model_prefix={TEMP_FILE} --vocab_size={VOCAB_SIZE} --character_coverage=1.0 --model_type=bpe')\r\n\r\ndf = pd.read_csv(TEMP_FILE + \".vocab\", sep=\"\\t\", # use a char for separation that cannot be inside vocab\r\n header=None, names=[\"vocab\", \"unk\"],\r\n encoding='utf-8', dtype=str, quotechar=\"\\r\", # use a char for quoting that cannot be inside vocab\r\n engine='python')\r\nvocab = df.vocab.values\r\nprint(vocab.shape)\r\n\r\nprint(len(vocab))\r\nfor i, current in enumerate(vocab):\r\n current = str(current)\r\n if current.startswith(\"▁\"):\r\n vocab[i] = current[1:]\r\n else:\r\n vocab[i] = \"##\" + current\r\n\r\nunused = []\r\n\r\nfor i in range(1, UNUSED_TOKENS + 1):\r\n unused.append(\"[unused%i]\" % i)\r\ntoadd = np.array([\"[PAD]\", \"[UNK]\", \"[CLS]\", \"[SEP]\", \"[MASK]\"])\r\nvocab = np.concatenate((toadd, vocab[3:], unused), axis=0)\r\n```",
"@Timoeller thanks for the open sourcing the model :+1: and the great [FARM](https://github.com/deepset-ai/FARM) library.\r\n\r\nI trained a German BERT model from scratch a while ago (16GB of text, incl. some WMT monolingual data for German) and here are some preliminary results:\r\n\r\n| Task | Result\r\n| --------- | -------\r\n| CoNLL-2003 | 85.49\r\n| GermEval | 84.38\r\n| GermEval18Coarse | 74.60 (reproduced result for German BERT was 74.06)\r\n\r\nSo on average the model is +0.48% better. My question to @Timoeller and @thomwolf does it make sense to include another cased German BERT model 🤔\r\n",
"Hey @stefan-it \r\nThanks for linking our library and also sharing your results. We have made the experience that multiple downstream runs vary in performance by some degree. We are also in contact with people applying German Bert for germeval19, using an ensemble of multiple downstream runs with quite good results (the official ranking isnt out yet though). \r\n\r\nConcerning the performance of your Bert model: Your NER results seem to be consistently better than with our German Bert. Maybe it lacks behind in the other tasks? \r\n\r\nHow about we both have a call this week to see where the differences of our Berts come from and if it makes sense to include your model as well? Just pm me. ",
"Awesome, I just send you an email :)"
] | 1,560 | 1,567 | 1,560 | CONTRIBUTOR | null | We have been training a German BERT model from scratch on some 12 GB of clean text. It outperforms the multilingual BERT (cased + uncased) in 4 out of 5 German NLP tasks.

Furthermore, we evaluated the pre-training not just by observing the training loss, but also with continuous downstream task checks.
For more details on our experiments you can check our official post here: https://deepset.ai/german-bert
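Once this is merged, loading should follow the usual pattern. A quick sketch (it assumes the shortcut name below is the one that lands in the release):
```python
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-german-cased", do_lower_case=False)
model = BertModel.from_pretrained("bert-base-german-cased")
model.eval()  # disable dropout for deterministic feature extraction
```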
Code-wise, we only had to make a few adaptations to the dictionaries for model, vocab and sequence length. We also included our evaluation post in the README, because we believe it might be interesting for the community. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/688/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/688/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/688",
"html_url": "https://github.com/huggingface/transformers/pull/688",
"diff_url": "https://github.com/huggingface/transformers/pull/688.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/688.patch",
"merged_at": 1560633222000
} |
https://api.github.com/repos/huggingface/transformers/issues/687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/687/comments | https://api.github.com/repos/huggingface/transformers/issues/687/events | https://github.com/huggingface/transformers/pull/687 | 456,302,598 | MDExOlB1bGxSZXF1ZXN0Mjg4Mzc0MzUw | 687 | Updating tests and doc | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@cad88e1`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #687 +/- ##\n=========================================\n Coverage ? 67.14% \n=========================================\n Files ? 18 \n Lines ? 3847 \n Branches ? 0 \n=========================================\n Hits ? 2583 \n Misses ? 1264 \n Partials ? 0\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `79.39% <100%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=footer). Last update [cad88e1...44e9ddd](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@cad88e1`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #687 +/- ##\n=========================================\n Coverage ? 67.14% \n=========================================\n Files ? 18 \n Lines ? 3847 \n Branches ? 0 \n=========================================\n Hits ? 2583 \n Misses ? 1264 \n Partials ? 0\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `79.39% <100%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=footer). Last update [cad88e1...44e9ddd](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/687?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,560 | 1,566 | 1,560 | MEMBER | null | - Fix GPT-2 test
- Update the documentation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/687/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/687",
"html_url": "https://github.com/huggingface/transformers/pull/687",
"diff_url": "https://github.com/huggingface/transformers/pull/687.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/687.patch",
"merged_at": 1560525826000
} |
https://api.github.com/repos/huggingface/transformers/issues/686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/686/comments | https://api.github.com/repos/huggingface/transformers/issues/686/events | https://github.com/huggingface/transformers/issues/686 | 456,188,107 | MDU6SXNzdWU0NTYxODgxMDc= | 686 | How to use GPT2 to predict and fit a word into an existing sentence? | {
"login": "padmalcom",
"id": 3961950,
"node_id": "MDQ6VXNlcjM5NjE5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/padmalcom",
"html_url": "https://github.com/padmalcom",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
"organizations_url": "https://api.github.com/users/padmalcom/orgs",
"repos_url": "https://api.github.com/users/padmalcom/repos",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"received_events_url": "https://api.github.com/users/padmalcom/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You would need an insertion-based transformer model like Google's recent KERMIT (http://arxiv.org/abs/1906.01604). But unfortunately, we currently don't have this model in the library.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,560 | 1,565 | 1,565 | NONE | null | Hi, I'd like to know if I can use GPT2 to decorate a simple sentence such as "Peter was sad because his sister had eaten all his candy." to get something like "Tuesday morning the ten years old Peter was sitting in his room and was sad because his mean sister Clara had eaten all his tasty candy with her friends."
Using BERT I can use BertForMaskedLM to insert [MASK] tokens into my simple sentence but results are not very good (a lot of repetitions and words do not really fit in).
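For context, here is roughly what my BERT attempt looks like (a minimal sketch; the sentence and mask position are just an example):
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

tokens = tokenizer.tokenize("[CLS] peter was [MASK] sad because his sister had eaten all his candy . [SEP]")
ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
mask_pos = tokens.index('[MASK]')

with torch.no_grad():
    predictions = model(ids)  # shape (1, seq_len, vocab_size)
print(tokenizer.convert_ids_to_tokens([predictions[0, mask_pos].argmax().item()]))
```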
Since I heard GPT2 was better for text generation, I'd now like to experiment with your fantastic library, but I cannot really find a starting point for how to insert text into an existing text instead of (what all tutorials do) adding it to the end of the sentence. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/686/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/685/comments | https://api.github.com/repos/huggingface/transformers/issues/685/events | https://github.com/huggingface/transformers/pull/685 | 456,117,213 | MDExOlB1bGxSZXF1ZXN0Mjg4MjI1NjQw | 685 | Add method to directly load TF Checkpoints for Bert models | {
"login": "chrisgzf",
"id": 4933577,
"node_id": "MDQ6VXNlcjQ5MzM1Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4933577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisgzf",
"html_url": "https://github.com/chrisgzf",
"followers_url": "https://api.github.com/users/chrisgzf/followers",
"following_url": "https://api.github.com/users/chrisgzf/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisgzf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisgzf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisgzf/subscriptions",
"organizations_url": "https://api.github.com/users/chrisgzf/orgs",
"repos_url": "https://api.github.com/users/chrisgzf/repos",
"events_url": "https://api.github.com/users/chrisgzf/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisgzf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I'm not convinced we need this additional option, see my [comment](https://github.com/huggingface/pytorch-pretrained-BERT/issues/676#issuecomment-502252962) in the associated issue thread.",
"As mentioned in, https://github.com/huggingface/pytorch-pretrained-BERT/issues/676#issuecomment-506134506, I recognise that directly loading TF checkpoints is a rather niche use case and will close this PR.",
"I don't think it's a niche but the `from_tf` option was there for importing from tf files so I would rather fix it to work in all cases rather than have several ways to import from a tf checkpoint.",
"I'll have a look."
] | 1,560 | 1,561 | 1,561 | CONTRIBUTOR | null | ## Summary
In this PR, I changed some documentation, and added `from_tf_ckpt()` method to `BertPreTrainedModel`.
This method allows users to directly load TensorFlow checkpoints (e.g. `model.ckpt-XXXX` files) for a task specific Bert model like `BertForTokenClassification` or `BertForSequenceClassification`.
**For example:**
```python
model = BertForSequenceClassification.from_tf_ckpt("/path/to/bert/bert_config.json",
"/path/to/bert/model.ckpt-12000",
num_labels=num_labels)
```
## Why this is needed:
This functionality has been requested by a number of people, like #676, https://github.com/huggingface/pytorch-pretrained-BERT/issues/676#issuecomment-501778493, #580, https://github.com/huggingface/pytorch-pretrained-BERT/issues/580#issuecomment-497286535, https://github.com/huggingface/pytorch-pretrained-BERT/issues/438#issuecomment-479405364 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/685/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/685",
"html_url": "https://github.com/huggingface/transformers/pull/685",
"diff_url": "https://github.com/huggingface/transformers/pull/685.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/685.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/684/comments | https://api.github.com/repos/huggingface/transformers/issues/684/events | https://github.com/huggingface/transformers/issues/684 | 456,019,938 | MDU6SXNzdWU0NTYwMTk5Mzg= | 684 | Implementation of 15% words masking in pretraining | {
"login": "jianyucai",
"id": 28853070,
"node_id": "MDQ6VXNlcjI4ODUzMDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28853070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jianyucai",
"html_url": "https://github.com/jianyucai",
"followers_url": "https://api.github.com/users/jianyucai/followers",
"following_url": "https://api.github.com/users/jianyucai/following{/other_user}",
"gists_url": "https://api.github.com/users/jianyucai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jianyucai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jianyucai/subscriptions",
"organizations_url": "https://api.github.com/users/jianyucai/orgs",
"repos_url": "https://api.github.com/users/jianyucai/repos",
"events_url": "https://api.github.com/users/jianyucai/events{/privacy}",
"received_events_url": "https://api.github.com/users/jianyucai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It should be fine. A bit of randomness in the pre-processing of the inputs is never bad when training a deep learning model.",
"> It should be fine. A bit of randomness in the pre-processing of the inputs is never bad when training a deep learning model.\r\n\r\nI found the same problem that the implementation is different from tensorflow. But the key point is not the fixed 15% prob of all the token. If we use the implementation of pytorch will produce two extreme case especially for short sentences like article title,usually 10-20 characters.\r\ncase 1. sentence with too much '[MASK]'\r\ncase 2. sentence with none '[MASK]'\r\nboth case1 and case2 would cause the drop of performance. case 1 make the model difficult to predict and case2 would not produce the loss.\r\nGiven a corpus with an average sentence length of 10. The implementation of tensorflow would generate 1 '[MASK]' for the sentences, but the implementation of pytorch would have :\r\n0.85^10 = 0.19 to generate 0 '[MASK]'\r\n0.15 * 0.85^9 * 10 =0.34 to generate 1 '[MASK]'\r\n0.15^2 * 0.85^8 * 45 =0.27 to generate 2 '[MASK]'\r\n0.15^2 * 0.85^7 * 120 =0.12 to generate 3 '[MAKS]'\r\n... \r\n\r\nIf we roughly consider the sentence with 15% '[MASK]' is appropriate, only 1/2 '[MASK]' is useful for training models. So only 0.34 + 0.27 = 0.61 training case is useful.\r\n\r\nAnd we found it is this is a very serious problem for short text.\r\n\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,560 | 1,568 | 1,568 | NONE | null | In the BERT paper, they randomly mask 15% of the words for pretraining, and that's exactly what they do in the TF version.
https://github.com/google-research/bert/blob/0fce551b55caabcfba52c61e18f34b541aef186a/create_pretraining_data.py#L342
However, the implementation here is a little different: instead of randomly selecting 15% of the tokens, it assigns a probability of 15% to each token, i.e. each token independently has a 15% chance of being masked. That means each time we might have fewer or more than 15% of the tokens masked.
So, is it correct to mask tokens with the expectation of 0.15 rather than fixed 15%?
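To make the difference concrete, here is a small illustrative sketch of the two strategies (toy code, not the actual library implementation):
```python
import random

tokens = ["tok%d" % i for i in range(10)]  # toy 10-token sentence

# TF-style: always mask a fixed round(0.15 * len(tokens)) positions (at least 1)
num_to_mask = max(1, round(0.15 * len(tokens)))
tf_style = random.sample(range(len(tokens)), num_to_mask)

# This repo's style: an independent 15% coin flip per token,
# so the count varies per sentence (and can be 0 or well above 15%)
pytorch_style = [i for i in range(len(tokens)) if random.random() < 0.15]
```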
https://github.com/huggingface/pytorch-pretrained-BERT/blob/f9cde97b313c3218e1b29ea73a42414dfefadb40/examples/lm_finetuning/simple_lm_finetuning.py#L276-L301 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/684/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/683/comments | https://api.github.com/repos/huggingface/transformers/issues/683/events | https://github.com/huggingface/transformers/pull/683 | 455,954,981 | MDExOlB1bGxSZXF1ZXN0Mjg4MDk4MDk5 | 683 | Fp16 | {
"login": "nschuc",
"id": 2816352,
"node_id": "MDQ6VXNlcjI4MTYzNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2816352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nschuc",
"html_url": "https://github.com/nschuc",
"followers_url": "https://api.github.com/users/nschuc/followers",
"following_url": "https://api.github.com/users/nschuc/following{/other_user}",
"gists_url": "https://api.github.com/users/nschuc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nschuc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nschuc/subscriptions",
"organizations_url": "https://api.github.com/users/nschuc/orgs",
"repos_url": "https://api.github.com/users/nschuc/repos",
"events_url": "https://api.github.com/users/nschuc/events{/privacy}",
"received_events_url": "https://api.github.com/users/nschuc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,560 | 1,560 | 1,560 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/683/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/683",
"html_url": "https://github.com/huggingface/transformers/pull/683",
"diff_url": "https://github.com/huggingface/transformers/pull/683.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/683.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/682/comments | https://api.github.com/repos/huggingface/transformers/issues/682/events | https://github.com/huggingface/transformers/issues/682 | 455,859,694 | MDU6SXNzdWU0NTU4NTk2OTQ= | 682 | Can't find gpt2 vocab file. | {
"login": "suchithtuple",
"id": 50451555,
"node_id": "MDQ6VXNlcjUwNDUxNTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/50451555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suchithtuple",
"html_url": "https://github.com/suchithtuple",
"followers_url": "https://api.github.com/users/suchithtuple/followers",
"following_url": "https://api.github.com/users/suchithtuple/following{/other_user}",
"gists_url": "https://api.github.com/users/suchithtuple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suchithtuple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suchithtuple/subscriptions",
"organizations_url": "https://api.github.com/users/suchithtuple/orgs",
"repos_url": "https://api.github.com/users/suchithtuple/repos",
"events_url": "https://api.github.com/users/suchithtuple/events{/privacy}",
"received_events_url": "https://api.github.com/users/suchithtuple/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I got the solution.\r\n"
] | 1,560 | 1,560 | 1,560 | NONE | null | When I run this
```
tokenizer = GPT2Tokenizer.from_pretrained(pretrained_model_name_or_path='gpt2',cache_dir=None)
```
I am getting this
```
Model name 'gpt2' was not found in model name list (gpt2). We assumed 'gpt2' was a path or url but couldn't find files https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json and https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt at this path or url.
```
Can anyone tell me what I am doing wrong? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/682/timeline | completed | null | null |
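A hedged workaround sketch for the lookup failure above: fetch the two files named in the error message and load the tokenizer from a local folder. `VOCAB_DIR` is hypothetical, `requests` is assumed available, and the local file names `vocab.json`/`merges.txt` are an assumption about what the loader expects in a directory; verify against your installed version.

```python
import os
import requests  # assumed available
from pytorch_pretrained_bert import GPT2Tokenizer

VOCAB_DIR = './gpt2_vocab'  # hypothetical local folder
os.makedirs(VOCAB_DIR, exist_ok=True)

base = 'https://s3.amazonaws.com/models.huggingface.co/bert'
files = {'gpt2-vocab.json': 'vocab.json', 'gpt2-merges.txt': 'merges.txt'}
for remote, local in files.items():
    r = requests.get(f'{base}/{remote}')
    r.raise_for_status()
    with open(os.path.join(VOCAB_DIR, local), 'wb') as f:
        f.write(r.content)

# from_pretrained also accepts a path to a folder holding the files
tokenizer = GPT2Tokenizer.from_pretrained(VOCAB_DIR)
```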
https://api.github.com/repos/huggingface/transformers/issues/681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/681/comments | https://api.github.com/repos/huggingface/transformers/issues/681/events | https://github.com/huggingface/transformers/issues/681 | 455,816,583 | MDU6SXNzdWU0NTU4MTY1ODM= | 681 | Can BertForMaskedLM be used to predict out-of-vocabulary words? | {
"login": "lcswillems",
"id": 5437552,
"node_id": "MDQ6VXNlcjU0Mzc1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lcswillems",
"html_url": "https://github.com/lcswillems",
"followers_url": "https://api.github.com/users/lcswillems/followers",
"following_url": "https://api.github.com/users/lcswillems/following{/other_user}",
"gists_url": "https://api.github.com/users/lcswillems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lcswillems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lcswillems/subscriptions",
"organizations_url": "https://api.github.com/users/lcswillems/orgs",
"repos_url": "https://api.github.com/users/lcswillems/repos",
"events_url": "https://api.github.com/users/lcswillems/events{/privacy}",
"received_events_url": "https://api.github.com/users/lcswillems/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"What do yuu get when you multiply the probabilities for words in these 2 places. Probability for b times probability for \"##oa\".\r\n['[CLS]', 'This', 'is', 'a', 'picture', 'of', 'a', '[MASK]', '[MASK]', '.']\r\n['[CLS]', 'This', 'is', 'a', 'picture', 'of', 'a', '[MASK]', '##oa', '.']\r\n\r\nBut there is also whole word masking model is realized by Google team, I hope it will be added here soon. It mask all the parts of these words at the same time, so it would give a better accuracy (1% for some tasks).\r\nhttps://github.com/google-research/bert\r\n\r\n",
"Nothing interesting.\r\n\r\nMoreover, I would not like to mask words. I just want to get predictions with the word in clear.",
"Well, another solution is to recreate bert with a large vocabulary. This would take days on TPU.\r\n",
"Okay thanks for the answer",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,560 | 1,565 | 1,565 | NONE | null | Hi,
I have this text:
```
[CLS] This is a picture of a boa.
```
And I would like to get the predictions of the `BertForMaskedLM` model for the word `boa`, without masking this word.
However, when I tokenize the text to give it to the network, I get:
```
['[CLS]', 'This', 'is', 'a', 'picture', 'of', 'a', 'b', '##oa', '.']
```
And the network gives me predictions for `b` and `##oa`. But nothing relevant. Could I get predictions for `boa`?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/681/timeline | completed | null | null |
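A minimal sketch of the multiplication idea from the first reply: mask both wordpiece positions and score the whole word as the sum of the pieces' log-probabilities (API as in pytorch-pretrained-BERT 0.6.x; position indices 7 and 8 follow the tokenization shown in the issue body):

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = BertForMaskedLM.from_pretrained('bert-base-cased')
model.eval()  # disable dropout for deterministic scores

tokens = ['[CLS]', 'This', 'is', 'a', 'picture', 'of', 'a', '[MASK]', '[MASK]', '.']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    scores = model(input_ids)            # (1, seq_len, vocab_size)
log_probs = torch.log_softmax(scores, dim=-1)

# log P("boa") ~= log P('b' at position 7) + log P('##oa' at position 8)
b_id, oa_id = tokenizer.convert_tokens_to_ids(['b', '##oa'])
word_log_prob = log_probs[0, 7, b_id] + log_probs[0, 8, oa_id]
print(word_log_prob.item())
```

Comparing this joint score across candidate piece sequences is only a heuristic, since the two masked positions are predicted independently.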
https://api.github.com/repos/huggingface/transformers/issues/680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/680/comments | https://api.github.com/repos/huggingface/transformers/issues/680/events | https://github.com/huggingface/transformers/issues/680 | 455,627,186 | MDU6SXNzdWU0NTU2MjcxODY= | 680 | Limit on the input text length? | {
"login": "lcswillems",
"id": 5437552,
"node_id": "MDQ6VXNlcjU0Mzc1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lcswillems",
"html_url": "https://github.com/lcswillems",
"followers_url": "https://api.github.com/users/lcswillems/followers",
"following_url": "https://api.github.com/users/lcswillems/following{/other_user}",
"gists_url": "https://api.github.com/users/lcswillems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lcswillems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lcswillems/subscriptions",
"organizations_url": "https://api.github.com/users/lcswillems/orgs",
"repos_url": "https://api.github.com/users/lcswillems/repos",
"events_url": "https://api.github.com/users/lcswillems/events{/privacy}",
"received_events_url": "https://api.github.com/users/lcswillems/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, 512 tokens for Bert.",
"Thank you :) ",
"Is there a way to bypass this limit? To increase the number of words?"
] | 1,560 | 1,560 | 1,560 | NONE | null | Hi,
I often get this error:
```
File "/miniconda3/envs/brightwater/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 268, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/miniconda3/envs/brightwater/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/miniconda3/envs/brightwater/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 117, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/miniconda3/envs/brightwater/lib/python3.6/site-packages/torch/nn/functional.py", line 1506, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range at ../aten/src/TH/generic/THTensorEvenMoreMath.cpp:193
```
It only happens for long texts. It doesn't fail on shorter chunks of the same long text that fails as a whole.
Is there a limitation on the length of the input text? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/680/timeline | completed | null | null |
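A common workaround sketch, given the 512-token position-embedding limit confirmed above: split the wordpiece sequence into windows no longer than the limit before calling the model.

```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
MAX_LEN = 512  # size of BERT's learned position-embedding table

def chunk_ids(text, max_len=MAX_LEN):
    """Yield token-id lists short enough for BERT's position embeddings."""
    ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))
    for start in range(0, len(ids), max_len):
        yield ids[start:start + max_len]

# Note: leave headroom (e.g. max_len=510) if you add [CLS]/[SEP] afterwards.
```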
https://api.github.com/repos/huggingface/transformers/issues/679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/679/comments | https://api.github.com/repos/huggingface/transformers/issues/679/events | https://github.com/huggingface/transformers/issues/679 | 455,615,467 | MDU6SXNzdWU0NTU2MTU0Njc= | 679 | Why the output of models are random. | {
"login": "sjcfr",
"id": 34537582,
"node_id": "MDQ6VXNlcjM0NTM3NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/34537582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sjcfr",
"html_url": "https://github.com/sjcfr",
"followers_url": "https://api.github.com/users/sjcfr/followers",
"following_url": "https://api.github.com/users/sjcfr/following{/other_user}",
"gists_url": "https://api.github.com/users/sjcfr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sjcfr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sjcfr/subscriptions",
"organizations_url": "https://api.github.com/users/sjcfr/orgs",
"repos_url": "https://api.github.com/users/sjcfr/repos",
"events_url": "https://api.github.com/users/sjcfr/events{/privacy}",
"received_events_url": "https://api.github.com/users/sjcfr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"They won't be able to help you if you don't provide a code for reproducing your issue, as this is not an expected behaviour.",
"Thanks a lot for reminding. The issue is renewed with the code.",
"That's true! I can reproduce it also on my computer... Really weird!",
"You should use `model.eval()` to desactivate dropout like in the usage examples of the readme.",
"It solves the problem, thanks!\r\n\r\n"
] | 1,560 | 1,560 | 1,560 | NONE | null | I tried to get word representations using the full-retrained bert model for several times, whereas the outputs of model are different for a same word in each time. Did I neglect something? Not knowing the reason and asking for help sincerely.
The code is:
`from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM `
`model = BertModel.from_pretrained('bert-base-uncased')`
`x=torch.LongTensor([[6541]])`
`y0=model(x)[0]`
`y1=model(x)[0]`
In theory, y0 should be equal to y1. However, they are different.
Both y0 and y1 have length 12, in accordance with the 12 layers of the 'bert-base-uncased' model. However, all 12 corresponding elements of y0 and y1 are different.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/679/timeline | completed | null | null |
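The fix identified in the thread, sketched end to end: `model.eval()` disables dropout, after which repeated forward passes agree.

```python
import torch
from pytorch_pretrained_bert import BertModel

model = BertModel.from_pretrained('bert-base-uncased')
model.eval()  # disable dropout, the source of the randomness above

x = torch.LongTensor([[6541]])
with torch.no_grad():
    y0, _ = model(x)  # list of 12 per-layer hidden states
    y1, _ = model(x)
assert all(torch.equal(a, b) for a, b in zip(y0, y1))
```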
https://api.github.com/repos/huggingface/transformers/issues/678 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/678/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/678/comments | https://api.github.com/repos/huggingface/transformers/issues/678/events | https://github.com/huggingface/transformers/issues/678 | 455,422,146 | MDU6SXNzdWU0NTU0MjIxNDY= | 678 | Transformer XL ProjectedAdaptiveLogSoftmax bug (maybe?) | {
"login": "shashwath94",
"id": 7631779,
"node_id": "MDQ6VXNlcjc2MzE3Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7631779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashwath94",
"html_url": "https://github.com/shashwath94",
"followers_url": "https://api.github.com/users/shashwath94/followers",
"following_url": "https://api.github.com/users/shashwath94/following{/other_user}",
"gists_url": "https://api.github.com/users/shashwath94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shashwath94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shashwath94/subscriptions",
"organizations_url": "https://api.github.com/users/shashwath94/orgs",
"repos_url": "https://api.github.com/users/shashwath94/repos",
"events_url": "https://api.github.com/users/shashwath94/events{/privacy}",
"received_events_url": "https://api.github.com/users/shashwath94/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes! We don't see that when we use the pre-trained model because the number of clusters is greater than zero anyway. Will fix.",
"Thank you. I created a PR since it was a small bug. #690 "
] | 1,560 | 1,560 | 1,560 | CONTRIBUTOR | null | In <a href="https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling_transfo_xl_utilities.py#L120">this line</a>, shouldn't the output be assigned to `out` when `n_clusters` is 0? Otherwise we run into `UnboundLocalError: local variable 'out' referenced before assignment` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/678/timeline | completed | null | null |
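A hypothetical, heavily simplified illustration of the reported pattern (not the library source; names and shapes are invented): when the `n_clusters == 0` branch computes logits without binding them, the trailing `return out` raises `UnboundLocalError`, and the fix is simply to assign in that branch.

```python
import torch

def projected_log_prob(hidden, weight, bias, n_clusters):
    if n_clusters == 0:
        # Fix: bind the result to `out`. In the reported bug this branch
        # computed the logits without assigning them, so the `return out`
        # below raised UnboundLocalError.
        out = torch.log_softmax(hidden @ weight.t() + bias, dim=-1)
    else:
        raise NotImplementedError('adaptive-cluster branch elided')
    return out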
https://api.github.com/repos/huggingface/transformers/issues/677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/677/comments | https://api.github.com/repos/huggingface/transformers/issues/677/events | https://github.com/huggingface/transformers/issues/677 | 455,296,243 | MDU6SXNzdWU0NTUyOTYyNDM= | 677 | Download the model without executing a Python script | {
"login": "lcswillems",
"id": 5437552,
"node_id": "MDQ6VXNlcjU0Mzc1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lcswillems",
"html_url": "https://github.com/lcswillems",
"followers_url": "https://api.github.com/users/lcswillems/followers",
"following_url": "https://api.github.com/users/lcswillems/following{/other_user}",
"gists_url": "https://api.github.com/users/lcswillems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lcswillems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lcswillems/subscriptions",
"organizations_url": "https://api.github.com/users/lcswillems/orgs",
"repos_url": "https://api.github.com/users/lcswillems/repos",
"events_url": "https://api.github.com/users/lcswillems/events{/privacy}",
"received_events_url": "https://api.github.com/users/lcswillems/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Is this what you want?\r\n\r\n```python\r\nPRETRAINED_MODEL_ARCHIVE_MAP = {\r\n 'bert-base-uncased': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz\",\r\n 'bert-large-uncased': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased.tar.gz\",\r\n 'bert-base-cased': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased.tar.gz\",\r\n 'bert-large-cased': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased.tar.gz\",\r\n 'bert-base-multilingual-uncased': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased.tar.gz\",\r\n 'bert-base-multilingual-cased': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased.tar.gz\",\r\n 'bert-base-chinese': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese.tar.gz\",\r\n}\r\n```\r\n\r\nfrom https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py\r\n\r\nI feel like I'm not exactly understanding your question. They have hosted the pytorch dumps of the pretrained BERT models that were released by Google and hosted them on AWS. After downloading the model, what do you wish to do with it?",
"Thank you for the answer.\r\n\r\nSo with your code, I now have the url. But, where should I put the files in my filesystem?\r\n\r\nSpacy provide a command to download the weights, put them at the good location in the filesystem, and use them. Is there an equivalent in your repository?\r\n\r\nFor example, [DeepPavlov](https://github.com/deepmipt/DeepPavlov) provides this command:\r\n\r\n```\r\npython -m deeppavlov install -d squad_bert\r\n```\r\n\r\nto install and download a model.",
"Does my question make sense?",
"Not really. What is the reason you are trying to do that?\r\nThis library will automatically download pre-trained weights, you don't need to do that yourself (even though you can).",
"> Not really.\r\n\r\nDo you understand what Spacy & Deeppavlov enable to do? If yes, I am asking if there is something similar here. But, if you don't understand, it surely means it is not possible.\r\n\r\n> What is the reason you are trying to do that?\r\n\r\nBecause, when I put my code in production, I don't want to make the first query very long because it has to download the model.",
"I happen to know quite well SpaCy (if you look at the Huggingface github, you will see we have developed a coreference resolution extension for SpaCy, [NeuralCoref](https://github.com/huggingface/neuralcoref), which interfaces directly with the cython internals of SpaCy) so I know the download process they use which is there mainly because they need the models to install as python packages (which we don't need to do here).\r\n\r\nYou actually shouldn't have to do anything special to avoid a long first query (and we don't do anything special at HuggingFace with the model in production) for the following reason:\r\nThe model weights are downloaded and cached when you instantiated the model for the first time and this should be done before the first query is even received. If you create and load the model at each query, you will experience a very heavy overhead, you should avoid that.\r\n\r\nIf you want to download the weights yourself (you can also do that) you will need to download the weights, configuration and vocabulary files manually from the url that @chrisgzf has pointed and put these in a folder. You can then load the model and tokenizer from that folder as indicated in the readme.",
"Okay, thank you for your detailed and interesting answer. So there is not the feature I was asking for.\r\n\r\nAnd I miswrote: I didn't want to say \"I don't want to make the first query very long\", but rather \"I don't want to make the first server start very long\" for some reasons."
] | 1,560 | 1,560 | 1,560 | NONE | null | Hi,
Is there a command to download a model (e.g. BertForMaskedLM) without having to execute a Python script?
For example, in Spacy, we can do `python -m spacy download en`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/677/timeline | completed | null | null |
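A sketch of the manual route described in the last replies, assuming the archive layout matches what `from_pretrained` expects on disk (`MODEL_DIR` is hypothetical; `requests` assumed available):

```python
import io
import tarfile
import requests  # assumed available
from pytorch_pretrained_bert import BertForMaskedLM

MODEL_DIR = './bert-base-uncased'  # hypothetical local folder

# Done once at build/deploy time, not per request:
url = 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz'
resp = requests.get(url)
resp.raise_for_status()
tarfile.open(fileobj=io.BytesIO(resp.content)).extractall(MODEL_DIR)

# At server start: load from disk, no network involved.
model = BertForMaskedLM.from_pretrained(MODEL_DIR)
model.eval()
```

The tokenizer's vocabulary (`bert-base-uncased-vocab.txt` in the same bucket) can be fetched and loaded the same way.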
https://api.github.com/repos/huggingface/transformers/issues/676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/676/comments | https://api.github.com/repos/huggingface/transformers/issues/676/events | https://github.com/huggingface/transformers/issues/676 | 455,135,026 | MDU6SXNzdWU0NTUxMzUwMjY= | 676 | Importing TF checkpoint as BertForTokenClassificiation | {
"login": "chrisgzf",
"id": 4933577,
"node_id": "MDQ6VXNlcjQ5MzM1Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4933577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisgzf",
"html_url": "https://github.com/chrisgzf",
"followers_url": "https://api.github.com/users/chrisgzf/followers",
"following_url": "https://api.github.com/users/chrisgzf/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisgzf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisgzf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisgzf/subscriptions",
"organizations_url": "https://api.github.com/users/chrisgzf/orgs",
"repos_url": "https://api.github.com/users/chrisgzf/repos",
"events_url": "https://api.github.com/users/chrisgzf/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisgzf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello everyone,\r\n\r\nI have temporarily come up with a workaround for this. Not sure if it's the best solution but it works. What I did was I essentially merged what `load_tf_weights_in_bert()` and what part of `BertPreTrainedModel.from_pretrained()` was doing. `BertPreTrainedModel` is the parent class of `BertForTokenClassification`, so if you are trying to do something similar for `BertFor{TaskName}`, it should work too.\r\n\r\n```python\r\ndef load_BFTC_from_TF_ckpt(bert_config, ckpt_path, num_labels):\r\n config = BertConfig.from_json_file(bert_config)\r\n model = BertForPreTraining(config)\r\n load_tf_weights_in_bert(model, ckpt_path)\r\n state_dict=model.state_dict()\r\n model = BertForTokenClassification(config, num_labels=num_labels)\r\n\r\n # Load from a PyTorch state_dict\r\n old_keys = []\r\n new_keys = []\r\n for key in state_dict.keys():\r\n new_key = None\r\n if 'gamma' in key:\r\n new_key = key.replace('gamma', 'weight')\r\n if 'beta' in key:\r\n new_key = key.replace('beta', 'bias')\r\n if new_key:\r\n old_keys.append(key)\r\n new_keys.append(new_key)\r\n for old_key, new_key in zip(old_keys, new_keys):\r\n state_dict[new_key] = state_dict.pop(old_key)\r\n\r\n missing_keys = []\r\n unexpected_keys = []\r\n error_msgs = []\r\n # copy state_dict so _load_from_state_dict can modify it\r\n metadata = getattr(state_dict, '_metadata', None)\r\n state_dict = state_dict.copy()\r\n if metadata is not None:\r\n state_dict._metadata = metadata\r\n\r\n def load(module, prefix=''):\r\n local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})\r\n module._load_from_state_dict(\r\n state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)\r\n for name, child in module._modules.items():\r\n if child is not None:\r\n load(child, prefix + name + '.')\r\n start_prefix = ''\r\n if not hasattr(model, 'bert') and any(s.startswith('bert.') for s in state_dict.keys()):\r\n start_prefix = 'bert.'\r\n load(model, prefix=start_prefix)\r\n if len(missing_keys) > 0:\r\n logger.info(\"Weights of {} not initialized from pretrained model: {}\".format(\r\n model.__class__.__name__, missing_keys))\r\n if len(unexpected_keys) > 0:\r\n logger.info(\"Weights from pretrained model not used in {}: {}\".format(\r\n model.__class__.__name__, unexpected_keys))\r\n if len(error_msgs) > 0:\r\n raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\n model.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\n return model\r\n\r\nmodel = load_BFTC_from_TF_ckpt(CONFIG_FILE, \"/home/bert/pt_baseuncased/model.ckpt-98000\", num_labels)\r\n```",
"Hi chrisgzf, thank you for the quick fix. I saw it's been integrated in the latest version.\r\n\r\nI am trying to do quite the same using **BertForSequenceClassification** but the \"**AttributeError**: 'BertForTokenClassification' object has no attribute 'bias'\" still shows up.\r\n\r\nAny ideas how I could use your fix for BertForSequenceClassification too ?\r\n\r\nThank you",
"Hi @stormskidd, \r\n\r\nin my code snippet here (https://github.com/huggingface/pytorch-pretrained-BERT/issues/676#issuecomment-501526327), just change\r\n\r\n`model = BertForTokenClassification(config, num_labels=num_labels)`\r\nto\r\n`model = BertForSequenceClassification(config, num_labels=num_labels)`\r\n\r\nand it _should_ work.",
"Thank you for the quick response.\r\nSo I tried this as you said :\r\n\r\n```\r\ndef load_BFTC_from_TF_ckpt(bert_config, ckpt_path, num_labels):\r\n config = BertConfig.from_json_file(bert_config)\r\n model = BertForSequenceClassification(config, num_labels=num_labels)\r\n load_tf_weights_in_bert(model, ckpt_path)\r\n state_dict=model.state_dict()\r\n model = **BertForSequenceClassification**(config, num_labels=num_labels)\r\n\r\n # Load from a PyTorch state_dict\r\n old_keys = []\r\n new_keys = []\r\n for key in state_dict.keys():\r\n new_key = None\r\n if 'gamma' in key:\r\n new_key = key.replace('gamma', 'weight')\r\n if 'beta' in key:\r\n new_key = key.replace('beta', 'bias')\r\n if new_key:\r\n old_keys.append(key)\r\n new_keys.append(new_key)\r\n for old_key, new_key in zip(old_keys, new_keys):\r\n state_dict[new_key] = state_dict.pop(old_key)\r\n\r\n missing_keys = []\r\n unexpected_keys = []\r\n error_msgs = []\r\n # copy state_dict so _load_from_state_dict can modify it\r\n metadata = getattr(state_dict, '_metadata', None)\r\n state_dict = state_dict.copy()\r\n if metadata is not None:\r\n state_dict._metadata = metadata\r\n\r\n def load(module, prefix=''):\r\n local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})\r\n module._load_from_state_dict(\r\n state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)\r\n for name, child in module._modules.items():\r\n if child is not None:\r\n load(child, prefix + name + '.')\r\n start_prefix = ''\r\n if not hasattr(model, 'bert') and any(s.startswith('bert.') for s in state_dict.keys()):\r\n start_prefix = 'bert.'\r\n load(model, prefix=start_prefix)\r\n if len(missing_keys) > 0:\r\n logger.info(\"Weights of {} not initialized from pretrained model: {}\".format(\r\n model.__class__.__name__, missing_keys))\r\n if len(unexpected_keys) > 0:\r\n logger.info(\"Weights from pretrained model not used in {}: {}\".format(\r\n model.__class__.__name__, unexpected_keys))\r\n if len(error_msgs) > 0:\r\n raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\n model.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\n return model\r\n```\r\n\r\nThen:\r\n\r\n```\r\nCONFIG_FILE = \"Bert/multi_cased_L-12_H-768_A-12/bert_config.json\"\r\nmodel = load_BFTC_from_TF_ckpt(CONFIG_FILE, \"model.ckpt-6032\", num_labels = 2)\r\n```\r\n\r\n\r\nBut still got the error:\r\n`\r\n<ipython-input-2-8b2ad52bd838> in load_BFTC_from_TF_ckpt(bert_config, ckpt_path, num_labels)\r\n 2 config = BertConfig.from_json_file(bert_config)\r\n 3 model = BertForSequenceClassification(config, num_labels=num_labels)\r\n----> 4 load_tf_weights_in_bert(model, ckpt_path)\r\n 5 state_dict=model.state_dict()\r\n 6 model = BertForSequenceClassification(config, num_labels=num_labels)\r\n\r\nC:\\ProgramData\\Anaconda3\\Lib\\site-packages\\pytorch_pretrained_bert\\modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)\r\n 89 \r\n 90 elif l[0] == 'output_bias' or l[0] == 'beta':\r\n---> 91 pointer = getattr(pointer, 'bias')\r\n 92 \r\n 93 elif l[0] == 'output_weights':\r\n\r\nC:\\ProgramData\\Anaconda3\\Lib\\site-packages\\torch\\nn\\modules\\module.py in __getattr__(self, name)\r\n 537 return modules[name]\r\n 538 raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\n--> 539 type(self).__name__, name))\r\n 540 \r\n 541 def __setattr__(self, name, value):\r\n\r\nAttributeError: 'BertForSequenceClassification' object has no attribute 'bias''`\r\n\r\n\r\n\r\n\r\nAlso, I wandered in the source code 
where the error occurs :\r\n```\r\n\r\n 90 elif l[0] == 'output_bias' or l[0] == 'beta':\r\n---> 91 pointer = getattr(pointer, 'bias')\r\n```\r\n\r\n\r\nI tried to change getattr(pointer, 'bias') to getattr(pointer, 'beta') but then I got a slightly different error:\r\n\r\n```\r\nC:\\ProgramData\\Anaconda3\\Lib\\site-packages\\pytorch_pretrained_bert\\modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)\r\n 89 \r\n 90 elif l[0] == 'output_bias' or l[0] == 'beta':\r\n---> 91 pointer = getattr(pointer, 'beta')\r\n 92 \r\n 93 elif l[0] == 'output_weights':\r\n\r\nC:\\ProgramData\\Anaconda3\\Lib\\site-packages\\torch\\nn\\modules\\module.py in __getattr__(self, name)\r\n 537 return modules[name]\r\n 538 raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\n--> 539 type(self).__name__, name))\r\n 540 \r\n 541 def __setattr__(self, name, value):\r\n\r\nAttributeError: 'BertLayerNorm' object has no attribute 'beta'\r\n```\r\n\r\nHope it helps. Please let me know you think of any workaround for this !\r\nBy the way, I'm on windows using Anaconda with Python 3.7.1\r\n\r\nGreetings,\r\nMaxime",
"Hi Maxime (@stormskidd),\r\n\r\nDo read my comment carefully. Change\r\n\r\n`model = BertForTokenClassification(config, num_labels=num_labels)`\r\nto\r\n`model = BertForSequenceClassification(config, num_labels=num_labels)`\r\n\r\nPlease leave `model = BertForPreTraining(config)` (line 3) as is, and do not change it.\r\n\r\nEdit: you might want to check out #685 as well. I submitted a PR to make it easier to do something we are trying to do. Maybe the code examples will make it clearer to you.\r\n\r\nIn line 3 of the function body, the model has to be an instance of `BertForPreTraining` because that's what `load_tf_weights_in_bert()` is designed to work with. I'm not sure if what I'm doing is the proper way or just a jank way, but I'm just trying to copy out the state_dict after the weights are imported as a `BertForPreTraining` instances, then creating a brand new `BertFor[TaskName]` instance, and loading the state_dict into it.\r\n\r\nHope this clears things up.\r\n\r\nCheers.",
"Oh, you're right I posted the wrong version of the many things I tried :(\r\n\r\nAnyway, I'm afraid the error still shows up with the following:\r\n\r\n```\r\ndef load_BFTC_from_TF_ckpt(bert_config, ckpt_path, num_labels):\r\n config = BertConfig.from_json_file(bert_config)\r\n model = BertForPreTraining(config)\r\n load_tf_weights_in_bert(model, ckpt_path)\r\n state_dict=model.state_dict()\r\n model = BertForSequenceClassification(config, num_labels=num_labels)\r\n\r\n # Load from a PyTorch state_dict\r\n old_keys = []\r\n new_keys = []\r\n for key in state_dict.keys():\r\n new_key = None\r\n if 'gamma' in key:\r\n new_key = key.replace('gamma', 'weight')\r\n if 'beta' in key:\r\n new_key = key.replace('beta', 'bias')\r\n if new_key:\r\n old_keys.append(key)\r\n new_keys.append(new_key)\r\n for old_key, new_key in zip(old_keys, new_keys):\r\n state_dict[new_key] = state_dict.pop(old_key)\r\n\r\n missing_keys = []\r\n unexpected_keys = []\r\n error_msgs = []\r\n # copy state_dict so _load_from_state_dict can modify it\r\n metadata = getattr(state_dict, '_metadata', None)\r\n state_dict = state_dict.copy()\r\n if metadata is not None:\r\n state_dict._metadata = metadata\r\n\r\n def load(module, prefix=''):\r\n local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})\r\n module._load_from_state_dict(\r\n state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)\r\n for name, child in module._modules.items():\r\n if child is not None:\r\n load(child, prefix + name + '.')\r\n start_prefix = ''\r\n if not hasattr(model, 'bert') and any(s.startswith('bert.') for s in state_dict.keys()):\r\n start_prefix = 'bert.'\r\n load(model, prefix=start_prefix)\r\n if len(missing_keys) > 0:\r\n logger.info(\"Weights of {} not initialized from pretrained model: {}\".format(\r\n model.__class__.__name__, missing_keys))\r\n if len(unexpected_keys) > 0:\r\n logger.info(\"Weights from pretrained model not used in {}: {}\".format(\r\n model.__class__.__name__, unexpected_keys))\r\n if len(error_msgs) > 0:\r\n raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\n model.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\n return model\r\n```\r\n\r\nThen:\r\n\r\n\r\n```\r\nCONFIG_FILE = \"/Bert/multi_cased_L-12_H-768_A-12/bert_config.json\"\r\nmodel = load_BFTC_from_TF_ckpt(CONFIG_FILE, \"model.ckpt-6032\", num_labels = 2)\r\n```\r\n\r\n\r\nAnd :\r\n```\r\nC:\\ProgramData\\Anaconda3\\Lib\\site-packages\\pytorch_pretrained_bert\\modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)\r\n 89 \r\n 90 elif l[0] == 'output_bias' or l[0] == 'beta':\r\n---> 91 pointer = getattr(pointer, 'bias')\r\n 92 \r\n 93 elif l[0] == 'output_weights':\r\n\r\nC:\\ProgramData\\Anaconda3\\Lib\\site-packages\\torch\\nn\\modules\\module.py in __getattr__(self, name)\r\n 537 return modules[name]\r\n 538 raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\n--> 539 type(self).__name__, name))\r\n 540 \r\n 541 def __setattr__(self, name, value):\r\n\r\nAttributeError: 'BertForPreTraining' object has no attribute 'bias'\r\n```\r\n\r\nGreetings,\r\nMax",
"@stormskidd,\r\n\r\nthis is odd.... I actually specifically tested my code above on `BertForSequenceClassification` as well and I was able to successfully import TF weights. The code snippet looks like it would work...\r\n\r\nJust want to check:\r\n- are you running the latest pytorch-pretrained-BERT from upstream master?\r\n- did you make any other changes to the source?\r\n- have you tried https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py and successfully converted your TF ckpt to a pytorch dump? if my code snippet above doesn't work for you, this should at least work for you.\r\n- not sure if this matters, but are you using CUDA? and do you have apex installed?\r\n\r\nDo let me know if you face any errors. I'm curious about this issue. Btw, check out my edit in the previous comment.\r\n\r\nChris",
"Hi, not sure I fully understand the issue here. What is the kind of tensorflow checkpoint you are trying to convert? Is it a pretrained model like the original Bert checkpoints or is it a fine-tuned model with additional elements (like a classification layer on top)?",
"Hi @thomwolf,\r\n\r\nI am trying to convert a pretrained model like the original Bert checkpoints, except that I did additional pretraining on top of the released models with `run_pretraining.py` from the BERT repo. I then wish to fine-tune these pretrained models in pytorch, which is why I had to do this conversion. I am not importing any fine-tuned TF models.",
"Ok so you can just convert your Tensorflow model using the command line script (see [here in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#Command-line-interface)) store the converted pytorch model in a folder with the configuration file and then load it in a `BertForTokenClassification` model as follow:\r\n`BertForTokenClassification.from_pretrained('PATH_TO_YOUR_CONVERTED_MODEL_FOLDER')`\r\n\r\nFor the tokenizer, you can use the one associated to the original TensorFlow model from which you did the fine-tuning since you probably didn't change the vocabulary itself.",
"Hi guys,\r\nI saw your discussion and it gave me the idea to try the following:\r\nIt seems like I can use both the function 'load_BFTC_from_TF_ckpt' and the script 'convert_tf_checkpoint_to_pytorch.py' to load a pretrained Google model : multi_cased_L-12_H-768_A-12\\bert_model.ckpt.\r\n\r\nHowever the error occurs when I tried to do the same with a fined-tuned model (from Bert script run_classifier.py ran on GluON/MRPC -like data).\r\nIs there any ways I can load a TF checkpoint fine-tuned model directly in Pytorch ? Or do I have to re-finetune it with the pytorch_pretrained_bert library ?\r\n\r\nThank you !\r\nMax",
"> Oh, you're right I posted the wrong version of the many things I tried :(\r\n> \r\n> Anyway, I'm afraid the error still shows up with the following:\r\n> \r\n> ```\r\n> def load_BFTC_from_TF_ckpt(bert_config, ckpt_path, num_labels):\r\n> config = BertConfig.from_json_file(bert_config)\r\n> model = BertForPreTraining(config)\r\n> load_tf_weights_in_bert(model, ckpt_path)\r\n> state_dict=model.state_dict()\r\n> model = BertForSequenceClassification(config, num_labels=num_labels)\r\n> \r\n> # Load from a PyTorch state_dict\r\n> old_keys = []\r\n> new_keys = []\r\n> for key in state_dict.keys():\r\n> new_key = None\r\n> if 'gamma' in key:\r\n> new_key = key.replace('gamma', 'weight')\r\n> if 'beta' in key:\r\n> new_key = key.replace('beta', 'bias')\r\n> if new_key:\r\n> old_keys.append(key)\r\n> new_keys.append(new_key)\r\n> for old_key, new_key in zip(old_keys, new_keys):\r\n> state_dict[new_key] = state_dict.pop(old_key)\r\n> \r\n> missing_keys = []\r\n> unexpected_keys = []\r\n> error_msgs = []\r\n> # copy state_dict so _load_from_state_dict can modify it\r\n> metadata = getattr(state_dict, '_metadata', None)\r\n> state_dict = state_dict.copy()\r\n> if metadata is not None:\r\n> state_dict._metadata = metadata\r\n> \r\n> def load(module, prefix=''):\r\n> local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})\r\n> module._load_from_state_dict(\r\n> state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)\r\n> for name, child in module._modules.items():\r\n> if child is not None:\r\n> load(child, prefix + name + '.')\r\n> start_prefix = ''\r\n> if not hasattr(model, 'bert') and any(s.startswith('bert.') for s in state_dict.keys()):\r\n> start_prefix = 'bert.'\r\n> load(model, prefix=start_prefix)\r\n> if len(missing_keys) > 0:\r\n> logger.info(\"Weights of {} not initialized from pretrained model: {}\".format(\r\n> model.__class__.__name__, missing_keys))\r\n> if len(unexpected_keys) > 0:\r\n> logger.info(\"Weights from pretrained model not used in {}: {}\".format(\r\n> model.__class__.__name__, unexpected_keys))\r\n> if len(error_msgs) > 0:\r\n> raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\n> model.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\n> return model\r\n> ```\r\n> \r\n> Then:\r\n> \r\n> ```\r\n> CONFIG_FILE = \"/Bert/multi_cased_L-12_H-768_A-12/bert_config.json\"\r\n> model = load_BFTC_from_TF_ckpt(CONFIG_FILE, \"model.ckpt-6032\", num_labels = 2)\r\n> ```\r\n> \r\n> And :\r\n> \r\n> ```\r\n> C:\\ProgramData\\Anaconda3\\Lib\\site-packages\\pytorch_pretrained_bert\\modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)\r\n> 89 \r\n> 90 elif l[0] == 'output_bias' or l[0] == 'beta':\r\n> ---> 91 pointer = getattr(pointer, 'bias')\r\n> 92 \r\n> 93 elif l[0] == 'output_weights':\r\n> \r\n> C:\\ProgramData\\Anaconda3\\Lib\\site-packages\\torch\\nn\\modules\\module.py in __getattr__(self, name)\r\n> 537 return modules[name]\r\n> 538 raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\n> --> 539 type(self).__name__, name))\r\n> 540 \r\n> 541 def __setattr__(self, name, value):\r\n> \r\n> AttributeError: 'BertForPreTraining' object has no attribute 'bias'\r\n> ```\r\n> \r\n> Greetings,\r\n> Max\r\n\r\nyou can try:\r\nmodel = BertForPreTraining.from_pretrained(BERT_DIR, from_tf=True)",
"Hello guys,\r\nhere is something I found that seem to do the trick for now:\r\n\r\nin `modeling.py:`\r\n\r\nIn class `class BertForSequenceClassification(BertPreTrainedModel):`\r\n\r\nAdd :\r\n```\r\n self.weight = Variable(torch.ones(2, config.hidden_size), requires_grad=True) \r\n self.bias = Variable(torch.ones(2), requires_grad=True)\r\n```\r\nto the attributes.\r\n\r\nObviously juste change the right class regarding your needs. I'll let you know if it provokes another error, but for now I can load in memory the trained model I couldn't load before.\r\n\r\nGreetings,\r\nMax\r\n",
"if use BertFor* class, will not initiate classifier layer.\r\nwhen i load tf model for predict/evaluate modify **load_tf_weights_in_bert**\r\n\r\n\r\n```\r\n if re.fullmatch(r'[A-Za-z]+_\\d+', m_name):\r\n l = re.split(r'_(\\d+)', m_name)\r\n else:\r\n l = [m_name]\r\n if l[0] == 'kernel' or l[0] == 'gamma':\r\n pointer = getattr(pointer, 'weight')\r\n elif l[0] == 'output_bias' or l[0] == 'beta':\r\n if pointer == model:\r\n pointer = getattr(pointer, 'classifier')\r\n pointer = getattr(pointer, 'bias')\r\n elif l[0] == 'output_weights':\r\n if pointer == model:\r\n pointer = getattr(pointer, 'classifier')\r\n pointer = getattr(pointer, 'weight')\r\n elif l[0] == 'squad':\r\n pointer = getattr(pointer, 'classifier')\r\n```\r\n",
"> \r\n> \r\n> Ok so you can just convert your Tensorflow model using the command line script (see [here in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#Command-line-interface)) store the converted pytorch model in a folder with the configuration file and then load it in a `BertForTokenClassification` model as follow:\r\n> `BertForTokenClassification.from_pretrained('PATH_TO_YOUR_CONVERTED_MODEL_FOLDER')`\r\n> \r\n> For the tokenizer, you can use the one associated to the original TensorFlow model from which you did the fine-tuning since you probably didn't change the vocabulary itself.\r\n\r\nHello @thomwolf,\r\n\r\nYes, I am aware of the conversion script and `from_pretrained()` being able to load full models (PyTorch dumps) from converted TF checkpoints. However, in my use case, I did pre-training with BERT using the script `run_pretraining.py` from the BERT repo, and I wanted to do fine-tuning on the many checkpoint steps that I have saved, so it would make more sense for me to load the checkpoints directly.\r\n\r\nHowever, I am aware that my use case is a very niche one, and the others here are talking about a different use case (loading TF finetuned models). Since my issue has been resolved, I will close this issue."
] | 1,560 | 1,561 | 1,561 | CONTRIBUTOR | null | Hello Everyone,
I've been stuck with trying to load TensorFlow checkpoints to be used by `pytorch-pretrained-bert` as `BertForTokenClassification`.
**pytorch-pretrained-BERT Version:** Installed from latest master branch.
**What works:**
```python
config = BertConfig.from_json_file(CONFIG_FILE)
model = BertForPreTraining(config)
model = load_tf_weights_in_bert(model, "/home/bert/pt_baseuncased/model.ckpt-98000")
```
**What I want to do:**
```python
config = BertConfig.from_json_file(CONFIG_FILE)
model = BertForTokenClassification(config, num_labels=num_labels)
# the difference is BertForTokenClassification instead of BertForPreTraining
model = load_tf_weights_in_bert(model, "/home/bert/pt_baseuncased/model.ckpt-98000")
```
**When I try to do this it gives me:**
```python
AttributeError: 'BertForTokenClassification' object has no attribute 'bias'
```
**Full Traceback:**
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-a8225a5966f7> in <module>
1 config = BertConfig.from_json_file(CONFIG_FILE)
2 model = BertForTokenClassification(config, num_labels=10)
----> 3 model = load_tf_weights_in_bert(model, "/home/gzhenfun/bert/pt_baseuncased/model.ckpt-98000")
~/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)
88 pointer = getattr(pointer, 'weight')
89 elif l[0] == 'output_bias' or l[0] == 'beta':
---> 90 pointer = getattr(pointer, 'bias')
91 elif l[0] == 'output_weights':
92 pointer = getattr(pointer, 'weight')
~/anaconda3/envs/bert/lib/python3.6/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
533 return modules[name]
534 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 535 type(self).__name__, name))
536
537 def __setattr__(self, name, value):
AttributeError: 'BertForTokenClassification' object has no attribute 'bias'
```
**To try to resolve this:**
I followed https://github.com/huggingface/pytorch-pretrained-BERT/issues/580#issuecomment-489519231 from #580, and changed my `modeling.py` to this:
```python
pointer = model
for m_name in name:
if re.fullmatch(r'[A-Za-z]+_\d+', m_name):
l = re.split(r'_(\d+)', m_name)
else:
l = [m_name]
if l[0] == 'kernel' or l[0] == 'gamma':
pointer = getattr(pointer, 'weight')
elif l[0] == 'output_bias' or l[0] == 'beta':
pointer = getattr(pointer, 'cls')
# added the line above
pointer = getattr(pointer, 'bias')
elif l[0] == 'output_weights':
pointer = getattr(pointer, 'cls')
# added the line above
pointer = getattr(pointer, 'weight')
elif l[0] == 'squad':
pointer = getattr(pointer, 'classifier')
else:
try:
pointer = getattr(pointer, l[0])
except AttributeError:
print("Skipping {}".format("/".join(name)))
continue
if len(l) >= 2:
num = int(l[1])
pointer = pointer[num]
```
However, that gives me:
```python
AttributeError: 'FusedLayerNorm' object has no attribute 'cls'
```
Does anybody here know how I can fix this and properly import a TF checkpoint as `BertForTokenClassification`?
Will appreciate any help. Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/676/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/676/timeline | completed | null | null |
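The maintainer's recommended path, sketched end to end. The imported function name and its signature are assumptions based on the conversion script linked in the issue; verify them against your installed version. Paths reuse the ones from the issue body, with `OUT_DIR` hypothetical.

```python
import os
import shutil
from pytorch_pretrained_bert import BertForTokenClassification
from pytorch_pretrained_bert.convert_tf_checkpoint_to_pytorch import (
    convert_tf_checkpoint_to_pytorch,  # assumed name; the README CLI wraps it
)

TF_CKPT = '/home/bert/pt_baseuncased/model.ckpt-98000'
CONFIG = '/home/bert/pt_baseuncased/bert_config.json'
OUT_DIR = '/home/bert/pt_baseuncased_pt'  # hypothetical output folder
os.makedirs(OUT_DIR, exist_ok=True)

# 1) TF checkpoint -> PyTorch dump
convert_tf_checkpoint_to_pytorch(TF_CKPT, CONFIG, os.path.join(OUT_DIR, 'pytorch_model.bin'))
shutil.copyfile(CONFIG, os.path.join(OUT_DIR, 'bert_config.json'))

# 2) Load the dump with a token-classification head on top
model = BertForTokenClassification.from_pretrained(OUT_DIR, num_labels=10)
```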
https://api.github.com/repos/huggingface/transformers/issues/675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/675/comments | https://api.github.com/repos/huggingface/transformers/issues/675/events | https://github.com/huggingface/transformers/pull/675 | 454,927,165 | MDExOlB1bGxSZXF1ZXN0Mjg3MjgxNDk1 | 675 | [hotfix] Fix frozen pooler parameters in SWAG example. | {
"login": "meetps",
"id": 6251729,
"node_id": "MDQ6VXNlcjYyNTE3Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6251729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meetps",
"html_url": "https://github.com/meetps",
"followers_url": "https://api.github.com/users/meetps/followers",
"following_url": "https://api.github.com/users/meetps/following{/other_user}",
"gists_url": "https://api.github.com/users/meetps/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meetps/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meetps/subscriptions",
"organizations_url": "https://api.github.com/users/meetps/orgs",
"repos_url": "https://api.github.com/users/meetps/repos",
"events_url": "https://api.github.com/users/meetps/events{/privacy}",
"received_events_url": "https://api.github.com/users/meetps/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @meetshah1995 "
] | 1,560 | 1,560 | 1,560 | CONTRIBUTOR | null | Hotfix for #461 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/675/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/675",
"html_url": "https://github.com/huggingface/transformers/pull/675",
"diff_url": "https://github.com/huggingface/transformers/pull/675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/675.patch",
"merged_at": 1560326482000
} |
https://api.github.com/repos/huggingface/transformers/issues/674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/674/comments | https://api.github.com/repos/huggingface/transformers/issues/674/events | https://github.com/huggingface/transformers/issues/674 | 454,870,078 | MDU6SXNzdWU0NTQ4NzAwNzg= | 674 | Gradual unfreezing and discriminative fine-tuning for BERT | {
"login": "dchang56",
"id": 24575558,
"node_id": "MDQ6VXNlcjI0NTc1NTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/24575558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dchang56",
"html_url": "https://github.com/dchang56",
"followers_url": "https://api.github.com/users/dchang56/followers",
"following_url": "https://api.github.com/users/dchang56/following{/other_user}",
"gists_url": "https://api.github.com/users/dchang56/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dchang56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dchang56/subscriptions",
"organizations_url": "https://api.github.com/users/dchang56/orgs",
"repos_url": "https://api.github.com/users/dchang56/repos",
"events_url": "https://api.github.com/users/dchang56/events{/privacy}",
"received_events_url": "https://api.github.com/users/dchang56/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I've tried a bit to play with these training schemes on a deep transformer for our [tutorial on Transfer Learning in Natural Language Processing](https://naacl2019.org/program/tutorials/#t4-transfer-learning-in-natural-language-processing) held at NAACL last week but I couldn't get gradual unfreezing and discriminative fine-tuning to out-perform a standard fine-tuning procedure (multi-tasking did help, however).\r\n\r\nYou can have a look at the results by reading the \"Hands-on\" parts of the tutorial here: https://tinyurl.com/NAACLTransfer.\r\n\r\nYou can give it a try your-self with the associated Colab notebook which is here: https://tinyurl.com/NAACLTransferColab (and a full stand-alone codebase is here: https://tinyurl.com/NAACLTransferCode).\r\n\r\nIt's possible that I just didn't spend enough time scanning the hyper-parameters or maybe these two training variants are better suited to LSTM than Transformers.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> I've tried a bit to play with these training schemes on a deep transformer for our [tutorial on Transfer Learning in Natural Language Processing](https://naacl2019.org/program/tutorials/#t4-transfer-learning-in-natural-language-processing) held at NAACL last week but I couldn't get gradual unfreezing and discriminative fine-tuning to out-perform a standard fine-tuning procedure (multi-tasking did help, however).\r\n> \r\n> You can have a look at the results by reading the \"Hands-on\" parts of the tutorial here: https://tinyurl.com/NAACLTransfer.\r\n> \r\n> You can give it a try your-self with the associated Colab notebook which is here: https://tinyurl.com/NAACLTransferColab (and a full stand-alone codebase is here: https://tinyurl.com/NAACLTransferCode).\r\n> \r\n> It's possible that I just didn't spend enough time scanning the hyper-parameters or maybe these two training variants are better suited to LSTM than Transformers.\r\n\r\nI find the standard fine-tuning procedure having a unstable issue, that different shuffle order affects a lot. I wander if unfreezing help the bert finetune get a relative stable result?\r\nThis is also mentioned in bert paper, they just use different random seeds to get a best result."
] | 1,560 | 1,566 | 1,566 | NONE | null | Three of the tips for fine-tuning proposed in ULMFIT are slanted triangular learning rates, gradual unfreezing, and discriminative fine-tuning.
I understand that BERT's default learning rate scheduler does something similar to STLR, but I was wondering if gradual unfreezing and discriminative fine-tuning are considered in BERT's fine-tuning implementations. Has anyone had experience implementing these two features in BERT fine-tuning? I'd like to hear your thoughts on it. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/674/timeline | completed | null | null |
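For concreteness, a sketch of the two ULMFiT tricks discussed above, applied to BERT with plain `torch.optim.Adam` (the decay factor 0.95 and the unfreezing schedule are arbitrary illustrative choices, not values from the thread):

```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)

# Discriminative fine-tuning: layers closer to the input get smaller rates.
base_lr, decay = 2e-5, 0.95
groups = [{'params': model.classifier.parameters(), 'lr': base_lr}]
for depth, layer in enumerate(reversed(model.bert.encoder.layer), start=1):
    groups.append({'params': layer.parameters(), 'lr': base_lr * decay ** depth})
optimizer = torch.optim.Adam(groups)

# Gradual unfreezing (sketch): freeze the encoder up front, then re-enable
# gradients layer-by-layer, top layer first, at the start of successive epochs.
for p in model.bert.parameters():
    p.requires_grad = False
for p in model.bert.encoder.layer[-1].parameters():
    p.requires_grad = True
```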
https://api.github.com/repos/huggingface/transformers/issues/673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/673/comments | https://api.github.com/repos/huggingface/transformers/issues/673/events | https://github.com/huggingface/transformers/issues/673 | 454,859,251 | MDU6SXNzdWU0NTQ4NTkyNTE= | 673 | LM fine-tuning without NSP | {
"login": "dchang56",
"id": 24575558,
"node_id": "MDQ6VXNlcjI0NTc1NTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/24575558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dchang56",
"html_url": "https://github.com/dchang56",
"followers_url": "https://api.github.com/users/dchang56/followers",
"following_url": "https://api.github.com/users/dchang56/following{/other_user}",
"gists_url": "https://api.github.com/users/dchang56/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dchang56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dchang56/subscriptions",
"organizations_url": "https://api.github.com/users/dchang56/orgs",
"repos_url": "https://api.github.com/users/dchang56/repos",
"events_url": "https://api.github.com/users/dchang56/events{/privacy}",
"received_events_url": "https://api.github.com/users/dchang56/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"To your first question, the inputs will be almost identical, but the token_type_ids argument will be unused, as this is the vector that indicates the split between the two 'sentences' for the NextSentence objective. I'm not familiar with that part of the code - you might be able to just pass `None` for that argument, or maybe you'll need to pass a vector of zeros that has the right shape (i.e. the same length as the sequence). Also, I believe pre-training with just the masked LM objective works okay - it's by far the most important of the two objectives. The BERT paper didn't show any ablation studies, so I don't know what the effect of removing it will be, but I wouldn't worry too much for a fine-tuning task.\r\n\r\nSecondly, I wouldn't expect a large benefit from just fine-tuning the LM on a small labelled training corpus. The main benefit arises when you have a large corpus to fine-tune the LM on, but only a small fraction of it is labelled. \r\n\r\nAlso, I would guess that fine-tuning the LM on the dev/test examples will probably result in test performance that is optimistic compared to the performance on truly unseen data, but I'm not aware of any research in that area. However, this suggests a potential research direction - if you try including the dev/test data in the LM pre-training task and find that it significantly improves dev/test accuracy, compared to text that was not included in the LM pre-training task, that would be an interesting approach to improving the performance of language models!\r\n\r\nIt would be tricky to take advantage of this effect in practice, but you can imagine ways it might be done - for example, at inference time it might be possible to add new inputs to the LM fine-tuning corpus, and then fine-tune your language model on them followed by retraining the classifier (with your pre-existing labelled data) and only then labelling the new inputs! This would probably be too computationally expensive to be used in many production systems, especially when low latency is required, but for some purposes the improvement in data efficiency and accuracy might be worthwhile.",
"Great, thank you!\r\n\r\nSo it's understood that language modeling (or masked LM) is an effective pre-training objective for learning a general representation of the language. But do you think it makes sense to use a MLM head during the fine-tuning phase? For example, if your main task of interest is sentence classification, perhaps you could fine-tune the pre-trained model on a sentence classification head as well as a LM head in a multi-task learning setting. Intuitively it would have a regularizing effect and potentially lead to better generalization. I'd love to hear your thoughts on that idea!",
"Take a look at issue #692 , there's a recent Chinese paper where they tried fine-tuning with domain text as well as doing something very similar to what you're proposing (multi-task fine-tuning) and reported good results.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> \r\n> \r\n> Hello,\r\n> \r\n> I'm thinking about fine-tuning a BERT model using only the Masked LM pre-training objective, and I'd appreciate a bit of guidance. The most straightforward way is probably to modify the simple_lm_finetuning.py script to only do LM fine-tuning. Besides importing BertForMaskedLM instead of BertForPretraining (which has both objectives), what changes should I make, and what potential problems should I consider?\r\n> \r\n> Also, would it make sense to do MLM fine-tuning on a relatively small domain-specific corpus consisting of just the training examples from the datasets? In other words, does it make sense to do LM pretraining on a corpus that includes training examples taken from the datasets for which you want to make downstream predictions? I'm assuming including the dev/test examples in the corpus is out of the question since it'll likely overfit, but what about just the training examples? I'd like hear to people's thoughts about it.\r\n> \r\n> Lastly, was the run_lm_finetuning.py script replaced by the contents in the examples/lm_finetuning directory? If so, there are still 2 references to that script in the README that should be edited.\r\n> \r\n> @Rocketknight1\r\n\r\nHello @dchang56\r\n\r\nDid you try fine-tuning a BERT model using only the Masked LM pre-training objective ?\r\n\r\nWas that relatively as good as doing that along with next sentence prediction ?\r\n\r\nHow much was your block size (maximum sequence length) for that ?",
"> \r\n> \r\n> To your first question, the inputs will be almost identical, but the token_type_ids argument will be unused, as this is the vector that indicates the split between the two 'sentences' for the NextSentence objective. I'm not familiar with that part of the code - you might be able to just pass `None` for that argument, or maybe you'll need to pass a vector of zeros that has the right shape (i.e. the same length as the sequence). Also, I believe pre-training with just the masked LM objective works okay - it's by far the most important of the two objectives. The BERT paper didn't show any ablation studies, so I don't know what the effect of removing it will be, but I wouldn't worry too much for a fine-tuning task.\r\n> \r\n> Secondly, I wouldn't expect a large benefit from just fine-tuning the LM on a small labelled training corpus. The main benefit arises when you have a large corpus to fine-tune the LM on, but only a small fraction of it is labelled.\r\n> \r\n> Also, I would guess that fine-tuning the LM on the dev/test examples will probably result in test performance that is optimistic compared to the performance on truly unseen data, but I'm not aware of any research in that area. However, this suggests a potential research direction - if you try including the dev/test data in the LM pre-training task and find that it significantly improves dev/test accuracy, compared to text that was not included in the LM pre-training task, that would be an interesting approach to improving the performance of language models!\r\n> \r\n> It would be tricky to take advantage of this effect in practice, but you can imagine ways it might be done - for example, at inference time it might be possible to add new inputs to the LM fine-tuning corpus, and then fine-tune your language model on them followed by retraining the classifier (with your pre-existing labelled data) and only then labelling the new inputs! This would probably be too computationally expensive to be used in many production systems, especially when low latency is required, but for some purposes the improvement in data efficiency and accuracy might be worthwhile.\r\n\r\nHello @Rocketknight1\r\n\r\nwhy do you think pre-training with just the masked LM objective works okay ?\r\n\r\nis there any article or study about that , have you got any link of it ?\r\n\r\nOr, have you down any training (BERT) without NSP (next sentence prediction) , yourself ?",
"> LM fine-tuning\r\n\r\nCould be useful to you: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling.ipynb"
] | 1,560 | 1,642 | 1,566 | NONE | null | Hello,
I'm thinking about fine-tuning a BERT model using only the Masked LM pre-training objective, and I'd appreciate a bit of guidance. The most straightforward way is probably to modify the simple_lm_finetuning.py script to only do LM fine-tuning. Besides importing BertForMaskedLM instead of BertForPretraining (which has both objectives), what changes should I make, and what potential problems should I consider?
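For concreteness, here is a minimal sketch of what I have in mind with `BertForMaskedLM` alone (masking a single hand-picked position purely for illustration; the actual pretraining scripts mask 15% of tokens at random):
```python
# Sketch only: a masked-LM-only training step, with no NSP head involved.
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.train()

tokens = ["[CLS]"] + tokenizer.tokenize("the quick brown fox jumps over the lazy dog") + ["[SEP]"]
input_ids = tokenizer.convert_tokens_to_ids(tokens)

# Label only the masked position with its original id; -1 elsewhere is ignored by the loss.
lm_labels = [-1] * len(input_ids)
mask_id = tokenizer.convert_tokens_to_ids(["[MASK]"])[0]
lm_labels[4] = input_ids[4]   # remember the true token
input_ids[4] = mask_id        # replace it with [MASK]

input_tensor = torch.tensor([input_ids])
# token_type_ids can stay all zeros, since the two-segment NSP input format is unused
loss = model(input_tensor,
             token_type_ids=torch.zeros_like(input_tensor),
             masked_lm_labels=torch.tensor([lm_labels]))
loss.backward()
```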
Also, would it make sense to do MLM fine-tuning on a relatively small domain-specific corpus consisting of just the training examples from the datasets? In other words, does it make sense to do LM pretraining on a corpus that includes training examples taken from the datasets for which you want to make downstream predictions? I'm assuming including the dev/test examples in the corpus is out of the question since it'll likely overfit, but what about just the training examples? I'd like to hear people's thoughts about it.
Lastly, was the run_lm_finetuning.py script replaced by the contents in the examples/lm_finetuning directory? If so, there are still 2 references to that script in the README that should be edited.
@Rocketknight1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/673/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/672/comments | https://api.github.com/repos/huggingface/transformers/issues/672/events | https://github.com/huggingface/transformers/pull/672 | 454,644,253 | MDExOlB1bGxSZXF1ZXN0Mjg3MDUyMjg0 | 672 | Add vocabulary and model config to the finetune output | {
"login": "oliverguhr",
"id": 3495355,
"node_id": "MDQ6VXNlcjM0OTUzNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3495355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverguhr",
"html_url": "https://github.com/oliverguhr",
"followers_url": "https://api.github.com/users/oliverguhr/followers",
"following_url": "https://api.github.com/users/oliverguhr/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverguhr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverguhr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverguhr/subscriptions",
"organizations_url": "https://api.github.com/users/oliverguhr/orgs",
"repos_url": "https://api.github.com/users/oliverguhr/repos",
"events_url": "https://api.github.com/users/oliverguhr/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverguhr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nice indeed, thanks @oliverguhr!"
] | 1,560 | 1,560 | 1,560 | CONTRIBUTOR | null | If you want to use your fine-tuned model to train a classifier you will need the configuration file and the vocabulary file. This PR adds them to both pre-training scripts. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/672/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/672",
"html_url": "https://github.com/huggingface/transformers/pull/672",
"diff_url": "https://github.com/huggingface/transformers/pull/672.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/672.patch",
"merged_at": 1560524567000
} |
https://api.github.com/repos/huggingface/transformers/issues/671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/671/comments | https://api.github.com/repos/huggingface/transformers/issues/671/events | https://github.com/huggingface/transformers/issues/671 | 454,510,586 | MDU6SXNzdWU0NTQ1MTA1ODY= | 671 | BERT: what is the difference between step and t_total? | {
"login": "xuanzebi",
"id": 26642184,
"node_id": "MDQ6VXNlcjI2NjQyMTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/26642184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuanzebi",
"html_url": "https://github.com/xuanzebi",
"followers_url": "https://api.github.com/users/xuanzebi/followers",
"following_url": "https://api.github.com/users/xuanzebi/following{/other_user}",
"gists_url": "https://api.github.com/users/xuanzebi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuanzebi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuanzebi/subscriptions",
"organizations_url": "https://api.github.com/users/xuanzebi/orgs",
"repos_url": "https://api.github.com/users/xuanzebi/repos",
"events_url": "https://api.github.com/users/xuanzebi/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuanzebi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,560 | 1,560 | 1,560 | NONE | null | :param t_total: how many training steps (updates) are planned
:param step: which of t_total steps we're on

```python
def get_lr(self, step, nowarn=False):
    """
    :param step: which of t_total steps we're on
    :param nowarn: set to True to suppress warning regarding training beyond specified 't_total' steps
    :return: learning rate multiplier for current update
    """
    if self.t_total < 0:
        return 1.
    progress = float(step) / self.t_total
    ret = self.get_lr_(progress)
    # warning for exceeding t_total (only active with warmup_linear)
    if not nowarn and self.warn_t_total and progress > 1. and progress > self.warned_for_t_total_at_progress:
        logger.warning(
            "Training beyond specified 't_total'. Learning rate multiplier set to {}. Please set 't_total' of {} correctly."
            .format(ret, self.__class__.__name__))
        self.warned_for_t_total_at_progress = progress
    # end warning
    return ret
```

In short: `t_total` is the total number of optimizer updates planned for the whole run, while `step` is the index of the current update; the schedule turns their ratio (`progress = step / t_total`) into a learning-rate multiplier.
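A small usage sketch with illustrative numbers: `t_total` is what gets passed to `BertAdam` up front as the planned number of updates, and the optimizer then calls `get_lr(step)` internally once per update:
```python
# Illustrative values only; note that t_total counts optimizer updates (steps), not examples.
from pytorch_pretrained_bert import BertForSequenceClassification
from pytorch_pretrained_bert.optimization import BertAdam

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = BertAdam(model.parameters(), lr=5e-5,
                     warmup=0.1,     # warm up over the first 10% of t_total
                     t_total=1000)   # planned total number of optimizer updates
```
So a run plans `t_total` updates up front, and `step` simply counts through them. | {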
"url": "https://api.github.com/repos/huggingface/transformers/issues/671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/671/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/670/comments | https://api.github.com/repos/huggingface/transformers/issues/670/events | https://github.com/huggingface/transformers/issues/670 | 454,491,144 | MDU6SXNzdWU0NTQ0OTExNDQ= | 670 | warmup for BertAdam | {
"login": "yangdechuan",
"id": 18901990,
"node_id": "MDQ6VXNlcjE4OTAxOTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/18901990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangdechuan",
"html_url": "https://github.com/yangdechuan",
"followers_url": "https://api.github.com/users/yangdechuan/followers",
"following_url": "https://api.github.com/users/yangdechuan/following{/other_user}",
"gists_url": "https://api.github.com/users/yangdechuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangdechuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangdechuan/subscriptions",
"organizations_url": "https://api.github.com/users/yangdechuan/orgs",
"repos_url": "https://api.github.com/users/yangdechuan/repos",
"events_url": "https://api.github.com/users/yangdechuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangdechuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Because we don't use BertAdam in fp16 mode but the optimizer of NVIDIA's apex library.",
"OK thank you!"
] | 1,560 | 1,560 | 1,560 | NONE | null | https://github.com/huggingface/pytorch-pretrained-BERT/blob/ee0308f79ded65dac82c53dfb03e9ff7f06aeee4/examples/run_classifier.py#L860
BertAdam() can update learning rate by itself.
Why update the learning rate manually here?
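For reference, the lines around the one linked above look roughly like this (a trimmed sketch of the fp16 branch; `warmup_linear` comes from `pytorch_pretrained_bert.optimization`):
```python
# fp16 path only: BertAdam would handle warmup itself, but apex's FusedAdam does not
if args.fp16:
    lr_this_step = args.learning_rate * warmup_linear(
        global_step / num_train_optimization_steps, args.warmup_proportion)
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr_this_step
optimizer.step()
optimizer.zero_grad()
```
So is the manual update only there for the fp16 path, where BertAdam is replaced? | {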
"url": "https://api.github.com/repos/huggingface/transformers/issues/670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/670/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/669/comments | https://api.github.com/repos/huggingface/transformers/issues/669/events | https://github.com/huggingface/transformers/issues/669 | 454,050,129 | MDU6SXNzdWU0NTQwNTAxMjk= | 669 | `get_final_text` bug when dealing with chinese sentence | {
"login": "alanwang93",
"id": 13610343,
"node_id": "MDQ6VXNlcjEzNjEwMzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/13610343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alanwang93",
"html_url": "https://github.com/alanwang93",
"followers_url": "https://api.github.com/users/alanwang93/followers",
"following_url": "https://api.github.com/users/alanwang93/following{/other_user}",
"gists_url": "https://api.github.com/users/alanwang93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alanwang93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alanwang93/subscriptions",
"organizations_url": "https://api.github.com/users/alanwang93/orgs",
"repos_url": "https://api.github.com/users/alanwang93/repos",
"events_url": "https://api.github.com/users/alanwang93/events{/privacy}",
"received_events_url": "https://api.github.com/users/alanwang93/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Perhaps it's the problem of tokenizer...After stripping the lengths of two sequences changed, so `orig_text` is returned...",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,560 | 1,565 | 1,565 | NONE | null | Hi,
I set `max_answer_length` to `30`, but I still got really long answers, so I printed the `tok_text`, `orig_text` and `final_text` in the function `write_predictions`.
```
tok_text: 权 健 公 司 可 能 涉 及 的 刑 事 罪 名 是 否 仅 仅 是 [UNK] 虚 假 广 告 罪
orig_text: 根据相关法律,权健公司可能涉及的刑事罪名是否仅仅是“虚假广告罪”“组织、领导传销活动罪”两个罪名,应该说仍有不少需要进一步深入调查的空间
final_text: 根据相关法律,权健公司可能涉及的刑事罪名是否仅仅是“虚假广告罪”“组织、领导传销活动罪”两个罪名,应该说仍有不少需要进一步深入调查的空间
```
From `tok_text` we can see that the answer should be `权健公司可能涉及的刑事罪名是否仅仅是“虚假广告罪`; however, the `final_text` we got turned out to be much longer.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/669/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/668/comments | https://api.github.com/repos/huggingface/transformers/issues/668/events | https://github.com/huggingface/transformers/pull/668 | 453,981,634 | MDExOlB1bGxSZXF1ZXN0Mjg2NTI4MTYw | 668 | apply Whole Word Masking technique | {
"login": "jeonsworld",
"id": 37530102,
"node_id": "MDQ6VXNlcjM3NTMwMTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/37530102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeonsworld",
"html_url": "https://github.com/jeonsworld",
"followers_url": "https://api.github.com/users/jeonsworld/followers",
"following_url": "https://api.github.com/users/jeonsworld/following{/other_user}",
"gists_url": "https://api.github.com/users/jeonsworld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeonsworld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeonsworld/subscriptions",
"organizations_url": "https://api.github.com/users/jeonsworld/orgs",
"repos_url": "https://api.github.com/users/jeonsworld/repos",
"events_url": "https://api.github.com/users/jeonsworld/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeonsworld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nice, thanks @jeonsworld "
] | 1,560 | 1,560 | 1,560 | CONTRIBUTOR | null | apply Whole Word Masking technique.
referred to [link](https://github.com/google-research/bert/blob/master/create_pretraining_data.py) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/668/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/668",
"html_url": "https://github.com/huggingface/transformers/pull/668",
"diff_url": "https://github.com/huggingface/transformers/pull/668.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/668.patch",
"merged_at": 1560245351000
} |
https://api.github.com/repos/huggingface/transformers/issues/667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/667/comments | https://api.github.com/repos/huggingface/transformers/issues/667/events | https://github.com/huggingface/transformers/issues/667 | 453,975,730 | MDU6SXNzdWU0NTM5NzU3MzA= | 667 | when I use bert-large-uncased to load BERT, a RuntimeError occurs, but bert-base-uncased is OK | {
"login": "HooFaya",
"id": 32974470,
"node_id": "MDQ6VXNlcjMyOTc0NDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/32974470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HooFaya",
"html_url": "https://github.com/HooFaya",
"followers_url": "https://api.github.com/users/HooFaya/followers",
"following_url": "https://api.github.com/users/HooFaya/following{/other_user}",
"gists_url": "https://api.github.com/users/HooFaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HooFaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HooFaya/subscriptions",
"organizations_url": "https://api.github.com/users/HooFaya/orgs",
"repos_url": "https://api.github.com/users/HooFaya/repos",
"events_url": "https://api.github.com/users/HooFaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/HooFaya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes, probably overflow. Try a smaller batch size?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,560 | 1,566 | 1,566 | NONE | null | using BertModel.from_pretrained( path of bert-large-uncased) caused error
RuntimeError: $ Torch: invalid memory size -- maybe an overflow? at ..\aten\src\TH\THGeneral.cpp:188
But using BertModel.from_pretrained(path of bert-base-uncased) works.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/667/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/666/comments | https://api.github.com/repos/huggingface/transformers/issues/666/events | https://github.com/huggingface/transformers/issues/666 | 453,773,212 | MDU6SXNzdWU0NTM3NzMyMTI= | 666 | GPT2 generating repetitive text | {
"login": "DEBADRIBASAK",
"id": 32904247,
"node_id": "MDQ6VXNlcjMyOTA0MjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/32904247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DEBADRIBASAK",
"html_url": "https://github.com/DEBADRIBASAK",
"followers_url": "https://api.github.com/users/DEBADRIBASAK/followers",
"following_url": "https://api.github.com/users/DEBADRIBASAK/following{/other_user}",
"gists_url": "https://api.github.com/users/DEBADRIBASAK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DEBADRIBASAK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DEBADRIBASAK/subscriptions",
"organizations_url": "https://api.github.com/users/DEBADRIBASAK/orgs",
"repos_url": "https://api.github.com/users/DEBADRIBASAK/repos",
"events_url": "https://api.github.com/users/DEBADRIBASAK/events{/privacy}",
"received_events_url": "https://api.github.com/users/DEBADRIBASAK/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Have you tried the provided GPT-2 generation example? It's here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_gpt2.py",
"> Have you tried the provided GPT-2 generation example? It's here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_gpt2.py\r\n\r\nHey, I encountered the same issue. I tried using the example you provided but it tends to produce repetitive text much more often than earlier versions of the library as well (from around 1-2 months back). Thank you very much for all the work! ",
"> Have you tried the provided GPT-2 generation example? It's here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_gpt2.py\r\n\r\nI tried the example. It is working properly. I think with `torch.argmax` there is chance of repetitive text generation. If we sample using `torch.multinomial`, there is always some variation.\r\n\r\n\r\n",
"`torch.argmax` is basically top_k with 1 which is very bad for creating \"human like\" sentences. A better way to sample is using Nucleus Sampling [https://arxiv.org/abs/1904.09751](url).\r\nNot sure whether this is implemented in PyTorch yet.\r\nEDIT: I have found this code from @thomwolf [https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317](url) that implements Nucleus sampling.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
" generating repetitive text when using GPU, but this not happens when using CPU, anyone knows how to solve this weird issue?",
"> when using GPU, but this not happens when using CPU, anyone knows how to solve this weird issue?\r\n\r\nI am facing the same issue. Can someone lead in this problem? "
] | 1,559 | 1,605 | 1,568 | NONE | null | I was trying to use the pretrained GPT2LMHeadModel for generating texts by feeding some initial English words. But it is always generating repetitive texts.
Input: All
Output: All All the same, the same, the same, the same, the same, the same, the same, the same, the same, the same, the same, the same,
Here is my code:
```python
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
from pytorch_pretrained_BERT.pytorch_pretrained_bert.modeling_gpt2 import GPT2LMHeadModel
from pytorch_pretrained_BERT.pytorch_pretrained_bert.tokenization_gpt2 import GPT2Tokenizer
from pytorch_pretrained_BERT.pytorch_pretrained_bert.optimization_openai import OpenAIAdam
from tqdm import tqdm
import torch.optim as optim
import random
import time
import os
import sys
import argparse
from pathlib import Path
from torch.utils.data import Dataset, TensorDataset, DataLoader, SequentialSampler, RandomSampler

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained("gpt2")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

df = pd.read_csv("gpt_test.csv", sep="\t")
df = df.values
# context = tokenizer.encode(context)
model.to(device)

source = []
generated = []
for l in tqdm(range(len(df))):
    source.append(str(df[l, -2]))
    context = tokenizer.encode(str(df[l, -2]))
    past = None
    for i in range(40):
        input_ids = torch.tensor([context])
        input_ids = input_ids.to(device)
        pred, _ = model(input_ids=input_ids)
        predictions = torch.argmax(pred[0, -1, :]).item()
        context.append(predictions)
        if predictions == 2:
            break
    generated_text = tokenizer.decode(context)
    generated.append(generated_text)

df1 = pd.DataFrame({'Source': source, 'Generated': generated})
df1.to_csv("./result_with_gpt.csv", sep="\t")
```
Can someone point out the mistake? I would be very grateful for a quick response.
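Edit: per the comments in this thread, the repetition most likely comes from decoding greedily with `torch.argmax`; a top-k sampling sketch like the following adds variation (the helper name and the `k`/`temperature` values are just illustrative):
```python
# Sketch of top-k sampling: sample the next token from the k most likely
# candidates instead of always taking the single argmax token.
import torch
import torch.nn.functional as F

def sample_next_token(logits, k=40, temperature=0.7):
    logits = logits / temperature
    top_logits, top_indices = torch.topk(logits, k)    # keep the k best candidates
    probs = F.softmax(top_logits, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)   # sample, don't argmax
    return top_indices[choice].item()

# In the generation loop above, the argmax line would become:
# predictions = sample_next_token(pred[0, -1, :])
```
Any pointers appreciated. | {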
"url": "https://api.github.com/repos/huggingface/transformers/issues/666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/666/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/665/comments | https://api.github.com/repos/huggingface/transformers/issues/665/events | https://github.com/huggingface/transformers/issues/665 | 453,744,807 | MDU6SXNzdWU0NTM3NDQ4MDc= | 665 | GPT-2 medium and large release? | {
"login": "g-karthik",
"id": 3851993,
"node_id": "MDQ6VXNlcjM4NTE5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-karthik",
"html_url": "https://github.com/g-karthik",
"followers_url": "https://api.github.com/users/g-karthik/followers",
"following_url": "https://api.github.com/users/g-karthik/following{/other_user}",
"gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions",
"organizations_url": "https://api.github.com/users/g-karthik/orgs",
"repos_url": "https://api.github.com/users/g-karthik/repos",
"events_url": "https://api.github.com/users/g-karthik/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-karthik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Take a look at the `attention` branch @g-karthik:\r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/attention/pytorch_pretrained_bert/modeling_gpt2.py#L42-L45",
"Thanks @julien-c, I had not looked at the file in the `attention` branch!",
"What is the recommended hardware setup for fine-tuning GPT2 medium?"
] | 1,559 | 1,560 | 1,560 | NONE | null | I presume the below model is GPT-2 small.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/ee0308f79ded65dac82c53dfb03e9ff7f06aeee4/pytorch_pretrained_bert/modeling_gpt2.py#L42
When do you plan on supporting the medium (already released by OpenAI) and large versions (not released by OpenAI) of GPT-2?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/665/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/664/comments | https://api.github.com/repos/huggingface/transformers/issues/664/events | https://github.com/huggingface/transformers/issues/664 | 453,309,623 | MDU6SXNzdWU0NTMzMDk2MjM= | 664 | Padding in GPT-2 | {
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"No padding implemented in GPT-2, you have to add implement your-self if you want e.g. by adding a special token but note that:\r\n- GPT-2 doesn't like left side padding (doesn't mix well with a causal transformer having absolute positions)\r\n- right-side padding is often not necessary (the causal mask means that right context is ignored anyway).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> * GPT-2 doesn't like left side padding (doesn't mix well with a causal transformer having absolute positions)\r\n\r\nTo those stumbling on this issue, this doesn't seem to be a problem anymore. [#3021](https://github.com/huggingface/transformers/issues/3021#issuecomment-1232149031)\r\n\r\n"
] | 1,559 | 1,666 | 1,567 | NONE | null | How do I add padding in GPT-2?
I get something like this when I prepend zeros to pad the sequences, but then I found out that token id 0 is actually not "[PAD]" but "!".
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1639, 481]
The zeros change the result quite a lot: not enough to totally ruin it, but the results become less precise, and the order of the most frequent predicted words is usually altered.
So how do we add padding there?
I have tried to follow the docs, but I didn't find anything analogous to BERT's attention mask.
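Edit: following the first comment, here is a right-padding sketch that seems to work around this (assumption: because attention is causal and positions are absolute, right-side pad tokens are never read as long as you take the logits at each sequence's true last token, so the pad value itself is arbitrary):
```python
# Sketch: right-side padding for batched GPT-2 inference, no attention mask needed.
import torch
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

texts = ["You will", "The quick brown fox"]
encoded = [tokenizer.encode(t) for t in texts]
lengths = torch.tensor([len(ids) for ids in encoded])
max_len = int(lengths.max())
# pad with 0 ("!") purely as filler; these positions are never read below
batch = torch.tensor([ids + [0] * (max_len - len(ids)) for ids in encoded])

with torch.no_grad():
    logits, _ = model(batch)
# next-token logits for each sequence, taken at its real last position
next_token_logits = logits[torch.arange(len(texts)), lengths - 1]
```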
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/664/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/664/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/663/comments | https://api.github.com/repos/huggingface/transformers/issues/663/events | https://github.com/huggingface/transformers/issues/663 | 453,088,216 | MDU6SXNzdWU0NTMwODgyMTY= | 663 | Accumulation | {
"login": "Eric-Wallace",
"id": 11711825,
"node_id": "MDQ6VXNlcjExNzExODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/11711825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eric-Wallace",
"html_url": "https://github.com/Eric-Wallace",
"followers_url": "https://api.github.com/users/Eric-Wallace/followers",
"following_url": "https://api.github.com/users/Eric-Wallace/following{/other_user}",
"gists_url": "https://api.github.com/users/Eric-Wallace/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Eric-Wallace/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Eric-Wallace/subscriptions",
"organizations_url": "https://api.github.com/users/Eric-Wallace/orgs",
"repos_url": "https://api.github.com/users/Eric-Wallace/repos",
"events_url": "https://api.github.com/users/Eric-Wallace/events{/privacy}",
"received_events_url": "https://api.github.com/users/Eric-Wallace/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,559 | 1,559 | 1,559 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/663/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/662/comments | https://api.github.com/repos/huggingface/transformers/issues/662/events | https://github.com/huggingface/transformers/issues/662 | 452,974,452 | MDU6SXNzdWU0NTI5NzQ0NTI= | 662 | MRPC / SQuAD stuck in "Running training" | {
"login": "AndreasFdev",
"id": 48248291,
"node_id": "MDQ6VXNlcjQ4MjQ4Mjkx",
"avatar_url": "https://avatars.githubusercontent.com/u/48248291?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreasFdev",
"html_url": "https://github.com/AndreasFdev",
"followers_url": "https://api.github.com/users/AndreasFdev/followers",
"following_url": "https://api.github.com/users/AndreasFdev/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreasFdev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreasFdev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreasFdev/subscriptions",
"organizations_url": "https://api.github.com/users/AndreasFdev/orgs",
"repos_url": "https://api.github.com/users/AndreasFdev/repos",
"events_url": "https://api.github.com/users/AndreasFdev/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreasFdev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Update: specify device works for at least 1 GPU\r\n\r\nexport CUDA_VISIBLE_DEVICES=0\r\npython run_classifier.py \\\r\n\r\n\r\nmore than 1 GPU still not working:\r\n\r\nexport CUDA_VISIBLE_DEVICES=0,1\r\npython run_classifier.py \\\r\n",
"@AndreasFdev Your distributed training setting is False.",
"Problem:\r\nP2P GPU traffic fails with enabled IOMMU, unless the cards are behind PLX switch.\r\n\r\nSolution:\r\nTo disable IOMMU edit /etc/default/grub \r\n#GRUB_CMDLINE_LINUX=\"\" <----- Original commented\r\nGRUB_CMDLINE_LINUX=\"iommu=soft\" <------ Change\r\n\r\nSource:\r\nhttps://github.com/pytorch/pytorch/issues/1637\r\n\r\nThanks to all who had a look"
] | 1,559 | 1,560 | 1,560 | NONE | null | Hi there!
I have been stuck for days.
ubuntu 19.04 (tried 18.04 also)
NVIDIA-SMI 418.74 Driver Version: 418.74
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
>>> import torch; torch.cuda.current_device(); torch.cuda.device_count(); torch.cuda.get_device_name(0); torch.cuda.get_device_name(1); torch.cuda.is_available(); exit()
0
2
'GeForce GTX 1080'
'GeForce GTX 1070 Ti'
True
Anaconda
tried Python 3.7 and now 3.6 (update: 3.5 doesn't work either)
tried WITH APEX and now without
conda list
# packages in environment at /home/andreas/anaconda3/envs/pytorchbert:
#
# Name Version Build Channel
atomicwrites 1.3.0 pypi_0 pypi
attrs 19.1.0 pypi_0 pypi
blas 1.0 mkl
blis 0.2.4 pypi_0 pypi
boto3 1.9.162 pypi_0 pypi
botocore 1.12.162 pypi_0 pypi
bzip2 1.0.6 h14c3975_5
ca-certificates 2019.5.15 0
certifi 2019.3.9 py36_0
cffi 1.12.3 py36h2e261b9_0
chardet 3.0.4 pypi_0 pypi
cmake 3.14.0 h52cb24c_0
cudatoolkit 10.0.130 0
cudnn 7.6.0 cuda10.0_0 anaconda
cymem 2.0.2 pypi_0 pypi
docutils 0.14 pypi_0 pypi
en-core-web-sm 2.1.0 pypi_0 pypi
expat 2.2.6 he6710b0_0
freetype 2.9.1 h8a8886c_1
ftfy 5.5.1 pypi_0 pypi
google-pasta 0.1.7 pypi_0 pypi
idna 2.8 pypi_0 pypi
importlib-metadata 0.17 pypi_0 pypi
intel-openmp 2019.3 199
jmespath 0.9.4 pypi_0 pypi
joblib 0.13.2 pypi_0 pypi
jpeg 9b h024ee3a_2
jsonschema 3.0.1 pypi_0 pypi
krb5 1.16.1 h173b8e3_7
libcurl 7.64.1 h20c2e04_0
libedit 3.1.20181209 hc058e9b_0
libffi 3.2.1 hd88cf55_4
libgcc-ng 8.2.0 hdf63c60_1
libgfortran-ng 7.3.0 hdf63c60_0
libpng 1.6.37 hbc83047_0
libssh2 1.8.2 h1ba5d50_0
libstdcxx-ng 8.2.0 hdf63c60_1
libtiff 4.0.10 h2733197_2
mkl 2019.3 199
mkl-include 2019.3 199
mkl_fft 1.0.12 py36ha843d7b_0
mkl_random 1.0.2 py36hd81dba3_0
more-itertools 7.0.0 pypi_0 pypi
murmurhash 1.0.2 pypi_0 pypi
ncurses 6.1 he6710b0_1
ninja 1.9.0 py36hfd86e86_0
numpy 1.16.4 py36h7e9f1db_0
numpy-base 1.16.4 py36hde5b4d6_0
olefile 0.46 py36_0
openssl 1.1.1c h7b6447c_1
packaging 19.0 pypi_0 pypi
pandas 0.24.2 py36he6710b0_0
pillow 6.0.0 py36h34e0f95_0
pip 19.1.1 py36_0
plac 0.9.6 pypi_0 pypi
pluggy 0.12.0 pypi_0 pypi
preshed 2.0.1 pypi_0 pypi
py 1.8.0 pypi_0 pypi
pycparser 2.19 py36_0
pyparsing 2.4.0 pypi_0 pypi
pyrsistent 0.15.2 pypi_0 pypi
pytest 4.6.2 pypi_0 pypi
python 3.6.8 h0371630_0
python-dateutil 2.8.0 py36_0
pytorch 1.1.0 py3.6_cuda10.0.130_cudnn7.5.1_0 pytorch
pytz 2019.1 py_0
readline 7.0 h7b6447c_5
regex 2019.6.5 pypi_0 pypi
requests 2.22.0 pypi_0 pypi
rhash 1.3.8 h1ba5d50_0
s3transfer 0.2.1 pypi_0 pypi
scikit-learn 0.21.2 pypi_0 pypi
scipy 1.2.1 py36h7c811a0_0
setuptools 41.0.1 py36_0
six 1.12.0 py36_0
sklearn 0.0 pypi_0 pypi
spacy 2.1.4 pypi_0 pypi
sqlite 3.28.0 h7b6447c_0
srsly 0.0.5 pypi_0 pypi
tb-nightly 1.14.0a20190605 pypi_0 pypi
tf-estimator-nightly 1.14.0.dev2019060601 pypi_0 pypi
tf-nightly-gpu 1.14.1.dev20190606 pypi_0 pypi
thinc 7.0.4 pypi_0 pypi
tk 8.6.8 hbc83047_0
torch 1.1.0 pypi_0 pypi
torchvision 0.3.0 py36_cu10.0.130_1 pytorch
tqdm 4.32.1 pypi_0 pypi
urllib3 1.25.3 pypi_0 pypi
wasabi 0.2.2 pypi_0 pypi
wcwidth 0.1.7 pypi_0 pypi
wheel 0.33.4 py36_0
wrapt 1.11.1 pypi_0 pypi
xz 5.2.4 h14c3975_4
yaml 0.1.7 had09818_2
zipp 0.5.1 pypi_0 pypi
zlib 1.2.11 h7b6447c_3
zstd 1.3.7 h0b5b093_0
Failed test:
========
python -m pytest -sv tests/
tests/modeling_gpt2_test.py::GPT2ModelTest::test_config_to_json_file PASSED
tests/modeling_gpt2_test.py::GPT2ModelTest::test_config_to_json_string PASSED
tests/modeling_gpt2_test.py::GPT2ModelTest::test_default PASSED
tests/modeling_gpt2_test.py::GPT2ModelTest::test_model_from_pretrained SKIPPED
tests/modeling_openai_test.py::OpenAIGPTModelTest::test_config_to_json_file PASSED
tests/modeling_openai_test.py::OpenAIGPTModelTest::test_config_to_json_string PASSED
tests/modeling_openai_test.py::OpenAIGPTModelTest::test_default PASSED
tests/modeling_openai_test.py::OpenAIGPTModelTest::test_model_from_pretrained SKIPPED
tests/modeling_test.py::BertModelTest::test_config_to_json_file PASSED
tests/modeling_test.py::BertModelTest::test_config_to_json_string PASSED
tests/modeling_test.py::BertModelTest::test_default PASSED
tests/modeling_test.py::BertModelTest::test_model_from_pretrained SKIPPED
tests/modeling_transfo_xl_test.py::TransfoXLModelTest::test_config_to_json_file PASSED
tests/modeling_transfo_xl_test.py::TransfoXLModelTest::test_config_to_json_string PASSED
tests/modeling_transfo_xl_test.py::TransfoXLModelTest::test_default PASSED
tests/modeling_transfo_xl_test.py::TransfoXLModelTest::test_model_from_pretrained SKIPPED
tests/optimization_test.py::OptimizationTest::test_adam PASSED
tests/optimization_test.py::ScheduleInitTest::test_bert_sched_init PASSED
tests/optimization_test.py::ScheduleInitTest::test_openai_sched_init PASSED
tests/optimization_test.py::WarmupCosineWithRestartsTest::test_it [0. 0. 0. 0. 0.]
[1. 1. 1. 1. 1.]
PASSED
tests/tokenization_gpt2_test.py::GPT2TokenizationTest::test_full_tokenizer PASSED
100%|███████████████████████████████████████| 1042301/1042301 [00:01<00:00, 741907.79B/s]
100%|█████████████████████████████████████████| 456318/456318 [00:00<00:00, 704099.11B/s]
PASSED
tests/tokenization_openai_test.py::OpenAIGPTTokenizationTest::test_full_tokenizer PASSED
tests/tokenization_openai_test.py::OpenAIGPTTokenizationTest::test_tokenizer_from_pretrained SKIPPED
tests/tokenization_test.py::TokenizationTest::test_basic_tokenizer_lower PASSED
tests/tokenization_test.py::TokenizationTest::test_basic_tokenizer_no_lower PASSED
tests/tokenization_test.py::TokenizationTest::test_chinese PASSED
tests/tokenization_test.py::TokenizationTest::test_full_tokenizer PASSED
tests/tokenization_test.py::TokenizationTest::test_is_control PASSED
tests/tokenization_test.py::TokenizationTest::test_is_punctuation PASSED
tests/tokenization_test.py::TokenizationTest::test_is_whitespace PASSED
tests/tokenization_test.py::TokenizationTest::test_tokenizer_from_pretrained SKIPPED
tests/tokenization_test.py::TokenizationTest::test_wordpiece_tokenizer PASSED
tests/tokenization_transfo_xl_test.py::TransfoXLTokenizationTest::test_full_tokenizer building vocab from /tmp/transfo_xl_tokenizer_test.txt
final vocab size 9
PASSED
tests/tokenization_transfo_xl_test.py::TransfoXLTokenizationTest::test_full_tokenizer_lower PASSED
tests/tokenization_transfo_xl_test.py::TransfoXLTokenizationTest::test_full_tokenizer_no_lower PASSED
tests/tokenization_transfo_xl_test.py::TransfoXLTokenizationTest::test_tokenizer_from_pretrained SKIPPED
=================================== warnings summary ====================================
/home/andreas/anaconda3/envs/pytorchbert/lib/python3.6/site-packages/_pytest/mark/structures.py:337
/home/andreas/anaconda3/envs/pytorchbert/lib/python3.6/site-packages/_pytest/mark/structures.py:337: PytestUnknownMarkWarning: Unknown pytest.mark.slow - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
PytestUnknownMarkWarning,
-- Docs: https://docs.pytest.org/en/latest/warnings.html
Used script:
=========
```bash
export GLUE_DIR=/data/glue_data
export TASK_NAME=MRPC

python run_classifier.py \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --do_lower_case \
  --data_dir $GLUE_DIR/$TASK_NAME \
  --bert_model bert-base-uncased \
  --max_seq_length 128 \
  --train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --output_dir /tmp/$TASK_NAME/
```
06/05/2019 12:06:17 - INFO - __main__ - device: cuda n_gpu: 2, distributed training: False, 16-bits training: False
06/05/2019 12:06:17 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /home/andreas/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
06/05/2019 12:06:17 - INFO - __main__ - LOOKING AT /data/glue_data/MRPC/train.tsv
06/05/2019 12:06:18 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /home/andreas/.pytorch_pretrained_bert/distributed_-1/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
06/05/2019 12:06:18 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /home/andreas/.pytorch_pretrained_bert/distributed_-1/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmp_0dlskh7
06/05/2019 12:06:21 - INFO - pytorch_pretrained_bert.modeling - Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 30522
}
06/05/2019 12:06:23 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
06/05/2019 12:06:23 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
06/05/2019 12:06:26 - INFO - __main__ - Writing example 0 of 3668
06/05/2019 12:06:26 - INFO - __main__ - *** Example ***
06/05/2019 12:06:26 - INFO - __main__ - guid: train-1
06/05/2019 12:06:26 - INFO - __main__ - tokens: [CLS] am ##ro ##zi accused his brother , whom he called " the witness " , of deliberately di ##stor ##ting his evidence . [SEP] referring to him as only " the witness " , am ##ro ##zi accused his brother of deliberately di ##stor ##ting his evidence . [SEP]
06/05/2019 12:06:26 - INFO - __main__ - input_ids: 101 2572 3217 5831 5496 2010 2567 1010 3183 2002 2170 1000 1996 7409 1000 1010 1997 9969 4487 23809 3436 2010 3350 1012 102 7727 2000 2032 2004 2069 1000 1996 7409 1000 1010 2572 3217 5831 5496 2010 2567 1997 9969 4487 23809 3436 2010 3350 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - label: 1 (id = 1)
06/05/2019 12:06:26 - INFO - __main__ - *** Example ***
06/05/2019 12:06:26 - INFO - __main__ - guid: train-2
06/05/2019 12:06:26 - INFO - __main__ - tokens: [CLS] yu ##ca ##ip ##a owned dominic ##k ' s before selling the chain to safe ##way in 1998 for $ 2 . 5 billion . [SEP] yu ##ca ##ip ##a bought dominic ##k ' s in 1995 for $ 69 ##3 million and sold it to safe ##way for $ 1 . 8 billion in 1998 . [SEP]
06/05/2019 12:06:26 - INFO - __main__ - input_ids: 101 9805 3540 11514 2050 3079 11282 2243 1005 1055 2077 4855 1996 4677 2000 3647 4576 1999 2687 2005 1002 1016 1012 1019 4551 1012 102 9805 3540 11514 2050 4149 11282 2243 1005 1055 1999 2786 2005 1002 6353 2509 2454 1998 2853 2009 2000 3647 4576 2005 1002 1015 1012 1022 4551 1999 2687 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - label: 0 (id = 0)
06/05/2019 12:06:26 - INFO - __main__ - *** Example ***
06/05/2019 12:06:26 - INFO - __main__ - guid: train-3
06/05/2019 12:06:26 - INFO - __main__ - tokens: [CLS] they had published an advertisement on the internet on june 10 , offering the cargo for sale , he added . [SEP] on june 10 , the ship ' s owners had published an advertisement on the internet , offering the explosives for sale . [SEP]
06/05/2019 12:06:26 - INFO - __main__ - input_ids: 101 2027 2018 2405 2019 15147 2006 1996 4274 2006 2238 2184 1010 5378 1996 6636 2005 5096 1010 2002 2794 1012 102 2006 2238 2184 1010 1996 2911 1005 1055 5608 2018 2405 2019 15147 2006 1996 4274 1010 5378 1996 14792 2005 5096 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - label: 1 (id = 1)
06/05/2019 12:06:26 - INFO - __main__ - *** Example ***
06/05/2019 12:06:26 - INFO - __main__ - guid: train-4
06/05/2019 12:06:26 - INFO - __main__ - tokens: [CLS] around 03 ##35 gm ##t , tab shares were up 19 cents , or 4 . 4 % , at a $ 4 . 56 , having earlier set a record high of a $ 4 . 57 . [SEP] tab shares jumped 20 cents , or 4 . 6 % , to set a record closing high at a $ 4 . 57 . [SEP]
06/05/2019 12:06:26 - INFO - __main__ - input_ids: 101 2105 6021 19481 13938 2102 1010 21628 6661 2020 2039 2539 16653 1010 2030 1018 1012 1018 1003 1010 2012 1037 1002 1018 1012 5179 1010 2383 3041 2275 1037 2501 2152 1997 1037 1002 1018 1012 5401 1012 102 21628 6661 5598 2322 16653 1010 2030 1018 1012 1020 1003 1010 2000 2275 1037 2501 5494 2152 2012 1037 1002 1018 1012 5401 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - label: 0 (id = 0)
06/05/2019 12:06:26 - INFO - __main__ - *** Example ***
06/05/2019 12:06:26 - INFO - __main__ - guid: train-5
06/05/2019 12:06:26 - INFO - __main__ - tokens: [CLS] the stock rose $ 2 . 11 , or about 11 percent , to close friday at $ 21 . 51 on the new york stock exchange . [SEP] pg & e corp . shares jumped $ 1 . 63 or 8 percent to $ 21 . 03 on the new york stock exchange on friday . [SEP]
06/05/2019 12:06:26 - INFO - __main__ - input_ids: 101 1996 4518 3123 1002 1016 1012 2340 1010 2030 2055 2340 3867 1010 2000 2485 5958 2012 1002 2538 1012 4868 2006 1996 2047 2259 4518 3863 1012 102 18720 1004 1041 13058 1012 6661 5598 1002 1015 1012 6191 2030 1022 3867 2000 1002 2538 1012 6021 2006 1996 2047 2259 4518 3863 2006 5958 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
06/05/2019 12:06:26 - INFO - __main__ - label: 1 (id = 1)
06/05/2019 12:06:28 - INFO - __main__ - ***** Running training *****
06/05/2019 12:06:28 - INFO - __main__ - Num examples = 3668
06/05/2019 12:06:28 - INFO - __main__ - Batch size = 32
06/05/2019 12:06:28 - INFO - __main__ - Num steps = 342
Epoch: 0%| | 0/3 [00:00<?, ?it/s]
At this point the script is stuck.
Once, I managed to Ctrl-C twice and got this error:
threading.py", line 1048, in _wait_for_tstate_lock elif lock.acquire(block, timeout):
I should mention that I am usually a Windows user and just installed Ubuntu to practice machine learning.
Best regards
Andreas | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/662/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/661/comments | https://api.github.com/repos/huggingface/transformers/issues/661/events | https://github.com/huggingface/transformers/issues/661 | 452,885,815 | MDU6SXNzdWU0NTI4ODU4MTU= | 661 | How to load an existing model | {
"login": "yuanjie-ai",
"id": 20265321,
"node_id": "MDQ6VXNlcjIwMjY1MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/20265321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanjie-ai",
"html_url": "https://github.com/yuanjie-ai",
"followers_url": "https://api.github.com/users/yuanjie-ai/followers",
"following_url": "https://api.github.com/users/yuanjie-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/yuanjie-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuanjie-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuanjie-ai/subscriptions",
"organizations_url": "https://api.github.com/users/yuanjie-ai/orgs",
"repos_url": "https://api.github.com/users/yuanjie-ai/repos",
"events_url": "https://api.github.com/users/yuanjie-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuanjie-ai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Follow the instructions in the readme?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,559 | 1,567 | 1,567 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/661/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/660/comments | https://api.github.com/repos/huggingface/transformers/issues/660/events | https://github.com/huggingface/transformers/issues/660 | 452,808,239 | MDU6SXNzdWU0NTI4MDgyMzk= | 660 | Recommended batch size and epochs for finetuning on large data | {
"login": "okgrammer",
"id": 26020190,
"node_id": "MDQ6VXNlcjI2MDIwMTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/26020190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/okgrammer",
"html_url": "https://github.com/okgrammer",
"followers_url": "https://api.github.com/users/okgrammer/followers",
"following_url": "https://api.github.com/users/okgrammer/following{/other_user}",
"gists_url": "https://api.github.com/users/okgrammer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/okgrammer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/okgrammer/subscriptions",
"organizations_url": "https://api.github.com/users/okgrammer/orgs",
"repos_url": "https://api.github.com/users/okgrammer/repos",
"events_url": "https://api.github.com/users/okgrammer/events{/privacy}",
"received_events_url": "https://api.github.com/users/okgrammer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@okgrammer Larger batch size often means lower accuracy but faster epochs. You can try it by doing several runs of varying batch size while keeping other params constant. \r\n\r\nSee, especially, https://arxiv.org/pdf/1804.07612.pdf",
"> In the original paper, BERT model is fine-tuned on downstream NLP tasks, where the number of instances for each task is in the order of thousands to hundreds of thousands. In my case, I have about 5 million samples. I'm curious whether there are recommended batch size and epochs for such training size? I'm fine-tuning bert-base-multilingual on 4 GPUs and there is a lot of unused GPU memory with the default batch size of 32. Even after increasing it to 128 there is still free available memory.\r\n\r\nI have exactly the same issue. Can anyone help? \r\nThe pretraining is really slow with more than 90% GPU memory available. No matter how I increase the batch size, the GPU memory usage is minimal."
] | 1,559 | 1,592 | 1,565 | NONE | null | In the original paper, the BERT model is fine-tuned on downstream NLP tasks, where the number of instances for each task is on the order of thousands to hundreds of thousands. In my case, I have about 5 million samples. I'm curious whether there are recommended batch sizes and epoch counts for such a training size. I'm fine-tuning bert-base-multilingual on 4 GPUs and there is a lot of unused GPU memory with the default batch size of 32. Even after increasing it to 128 there is still free memory available.
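(If it helps: a rough sketch of raising the effective batch size while keeping updates sensible; `model`, `optimizer`, and `dataloader` are placeholders, and the linear learning-rate scaling is a heuristic from the large-batch literature, not a recommendation of this repository.)
```python
base_lr, base_batch_size = 3e-5, 32
batch_size = 128

# Heuristic: scale the learning rate roughly linearly with the batch size
# (pass this lr when constructing the optimizer).
lr = base_lr * batch_size / base_batch_size

accumulation_steps = 4  # emulate an even larger effective batch
optimizer.zero_grad()
for step, batch in enumerate(dataloader):
    loss = model(*batch)                    # placeholder forward returning a loss
    (loss / accumulation_steps).backward()  # average gradients over micro-batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```
 | {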
"url": "https://api.github.com/repos/huggingface/transformers/issues/660/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/660/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/659/comments | https://api.github.com/repos/huggingface/transformers/issues/659/events | https://github.com/huggingface/transformers/issues/659 | 452,098,572 | MDU6SXNzdWU0NTIwOTg1NzI= | 659 | Whole Word Masking Models update | {
"login": "frankxu2004",
"id": 6738274,
"node_id": "MDQ6VXNlcjY3MzgyNzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6738274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankxu2004",
"html_url": "https://github.com/frankxu2004",
"followers_url": "https://api.github.com/users/frankxu2004/followers",
"following_url": "https://api.github.com/users/frankxu2004/following{/other_user}",
"gists_url": "https://api.github.com/users/frankxu2004/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankxu2004/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankxu2004/subscriptions",
"organizations_url": "https://api.github.com/users/frankxu2004/orgs",
"repos_url": "https://api.github.com/users/frankxu2004/repos",
"events_url": "https://api.github.com/users/frankxu2004/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankxu2004/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It's not yet but thanks for the pointer, we can probably add it fairly easily. I'll have a look.",
"+10000 This would be very helpful!",
"Hi, \r\n\r\nI converted the cased and uncased whole-word-masking models using the command line tool. If you're interested in adding these to the repository, I've uploaded them to [this](https://www.kaggle.com/bkkaggle/bert-large-whole-word-masking) kaggle dataset. ",
"Is this resolved? These seem to be available at head, and I don't see anything immediately wrong when I try them...",
"Yes they are working fine, I've added them to master last week.\r\nThey will be advertised in the next release.\r\nWhen fine-tuned with run_squad they give pretty nice results: `exact_match: 86.91, f1: 93.15`.\r\nI've included a version fine-tuned on SQuAD as well.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Does Whole Word Masking support bert base as well?"
] | 1,559 | 1,571 | 1,566 | NONE | null | Recently Google updated their TF implementation (`https://github.com/google-research/bert`) with Whole Word Masking models that mask whole random words instead of just random wordpieces, which results in a performance gain.
Just wondering if this will be implemented here?
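(For later readers: once merged, loading should work like any other shortcut; a sketch, with the shortcut string assumed from the maintainers' replies and therefore worth double-checking:)
```python
from pytorch_pretrained_bert import BertTokenizer, BertForQuestionAnswering

# Shortcut name assumed from the replies on this issue; verify before use.
name = 'bert-large-uncased-whole-word-masking'
tokenizer = BertTokenizer.from_pretrained(name, do_lower_case=True)
model = BertForQuestionAnswering.from_pretrained(name)
```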
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/659/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/658/comments | https://api.github.com/repos/huggingface/transformers/issues/658/events | https://github.com/huggingface/transformers/issues/658 | 452,020,617 | MDU6SXNzdWU0NTIwMjA2MTc= | 658 | SQuAD 1.1 very low evaluation score when using `--fp16` | {
"login": "knuser",
"id": 51361990,
"node_id": "MDQ6VXNlcjUxMzYxOTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/51361990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/knuser",
"html_url": "https://github.com/knuser",
"followers_url": "https://api.github.com/users/knuser/followers",
"following_url": "https://api.github.com/users/knuser/following{/other_user}",
"gists_url": "https://api.github.com/users/knuser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/knuser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/knuser/subscriptions",
"organizations_url": "https://api.github.com/users/knuser/orgs",
"repos_url": "https://api.github.com/users/knuser/repos",
"events_url": "https://api.github.com/users/knuser/events{/privacy}",
"received_events_url": "https://api.github.com/users/knuser/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I took example code from https://github.com/huggingface/pytorch-pretrained-BERT#fine-tuning-bert-large-on-gpus (which has additional option `--loss_scale 128`). Still getting very low test scores:\r\n```\r\n$ python evaluate-v1.1.py dev-v1.1.json ../output/debug_squad_fp16/predictions.json \r\n{\"exact_match\": 0.5771050141911069, \"f1\": 8.853750220358535}\r\n```\r\n\r\nIs there any known bug in current (on `v0.6.2`) PyTorch BERT implementation or is this my setup?\r\n\r\n---\r\n\r\n**UPDATE**\r\nProbably there is a bug in current stable `v0.6.2` version. When running same command on latest `master` branch I'm getting good results:\r\n```\r\n$ python evaluate-v1.1.py dev-v1.1.json ../output/debug_squad_fp16/predictions.json\r\n{\"exact_match\": 81.45695364238411, \"f1\": 88.71433452234619}\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,559 | 1,565 | 1,565 | NONE | null | I'm replicating SQuAD 1.1 https://github.com/huggingface/pytorch-pretrained-BERT/tree/v0.6.2#squad on latest release `v0.6.2`.
My setup:
* GeForce RTX 2080 Ti
* Driver Version: 418.43
* CUDA Version: 10.1
* Linux Ubuntu 18.10
* pytorch 1.1.0 (installed via conda: py3.7_cuda10.0.130_cudnn7.5.1_0)
* latest `apex` package
I'm on latest release:
```
$ git status
HEAD detached at v0.6.2
```
-------------
I'm replicating fine-tuning BERT on SQuAD 1.1. When executing without `--fp16` I'm getting the expected result:
```
$ python evaluate-v1.1.py dev-v1.1.json /tmp/debug_squad/predictions.json
{"exact_match": 81.58940397350993, "f1": 88.6984251786611}
```
Full command:
```
export SQUAD_DIR=squad_11_data
python run_squad.py \
--bert_model bert-base-uncased \
--do_train \
--do_predict \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--train_batch_size 8 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
Same experiment with `--fp16` I'm getting very poor results:
```
$ python evaluate-v1.1.py dev-v1.1.json /tmp/debug_squad_fp16_apex/predictions.json
{"exact_match": 0.47303689687795647, "f1": 8.678859681492447}
```
Full command:
```
export SQUAD_DIR=squad_11_data
python run_squad.py \
--bert_model bert-base-uncased \
--do_train \
--do_predict \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--train_batch_size 8 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad_fp16/ \
--fp16
```
I remember that I saw information about gradient overflow several times (sorry, I don't have more details; I lost the output logs).
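(Context for those messages: fp16 training typically uses dynamic loss scaling, which deliberately skips steps when gradients overflow. A minimal sketch of the mechanism, not this repository's actual implementation:)
```python
import torch

scale = 2 ** 15  # current dynamic loss scale

def fp16_step(loss, model, optimizer):
    global scale
    (loss * scale).backward()  # scale up so small fp16 gradients don't underflow
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    if any(torch.isinf(g).any() or torch.isnan(g).any() for g in grads):
        scale /= 2             # overflow detected: skip this step, lower the scale
        optimizer.zero_grad()
        return
    for g in grads:
        g.div_(scale)          # unscale before the real update
    optimizer.step()
    optimizer.zero_grad()
```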
How to get decent results when using `--fp16`? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/658/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/657/comments | https://api.github.com/repos/huggingface/transformers/issues/657/events | https://github.com/huggingface/transformers/issues/657 | 451,878,667 | MDU6SXNzdWU0NTE4Nzg2Njc= | 657 | How to use different learning rates in the classifier example. | {
"login": "svishnu88",
"id": 3419879,
"node_id": "MDQ6VXNlcjM0MTk4Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3419879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svishnu88",
"html_url": "https://github.com/svishnu88",
"followers_url": "https://api.github.com/users/svishnu88/followers",
"following_url": "https://api.github.com/users/svishnu88/following{/other_user}",
"gists_url": "https://api.github.com/users/svishnu88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/svishnu88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/svishnu88/subscriptions",
"organizations_url": "https://api.github.com/users/svishnu88/orgs",
"repos_url": "https://api.github.com/users/svishnu88/repos",
"events_url": "https://api.github.com/users/svishnu88/events{/privacy}",
"received_events_url": "https://api.github.com/users/svishnu88/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Did anyone conduct different learning rate in different layers when fine-tuning BERT?\r\n\r\nThanks. ",
"[This paper](https://arxiv.org/abs/1905.05583) suggests a \"layer-wise decreasing learning rate\" improves BERT for text classification tasks, but I can't see any option for it in the [AdamW](https://huggingface.co/transformers/main_classes/optimizer_schedules.html#adamw-pytorch) optimizer. If it is helpful for text classification, it would be great to see it implemented.",
"Did anyone conduct different learning rate in different layers when fine-tuning BERT?\r\n\r\nThanks",
"You can do this in PyTorch (not sure about tf) and my understanding from this thread is that `huggingface`'s `AdamW` is now equivalent to PT's `AdamW` (see https://github.com/huggingface/transformers/issues/3407) so it should be equivalent - it would be great to get confirmation of this from someone more familiar with the huggingface codebase. \r\n\r\nSee here for PT multiple rates: https://discuss.pytorch.org/t/how-to-set-a-different-learning-rate-for-a-single-layer-in-a-network/48552/4\r\n\r\n",
"As an update to the above - it actually _is_ possible to use the `huggingface` `AdamW` directly with different learning rates. \r\n\r\nSay you wanted to train your new parameters at x10 the learning rate of the pre-trained bert-variant parameters (in this case held as `model.bert`) you would do:\r\n```python\r\nfrom transformers import AdamW\r\n# define model etc.\r\n...\r\n\r\npretrained = model.bert.parameters()\r\n# Get names of pretrained parameters (including `bert.` prefix)\r\npretrained_names = [f'bert.{k}' for (k, v) in model.bert.named_parameters()]\r\n\r\nnew_params= [v for k, v in model.named_parameters() if k not in pretrained_names]\r\n\r\noptimizer = AdamW(\r\n [{'params': pretrained}, {'params': new_params, 'lr': learning_rate * 10}],\r\n lr=learning_rate,\r\n)\r\n```"
] | 1,559 | 1,610 | 1,564 | NONE | null | Hi,
I am trying to use different learning rates for the BERT encoder and the classifier. I am assuming that I can just pass model.bert.parameters() and model.classifier.parameters() as below.
```python
optimizer_grouped_parameters = [
    {'params': model.bert.parameters(), 'lr': 0.001},
    {'params': model.classifier.parameters(), 'lr': 0.01},
    {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
    {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
```
Can someone confirm if this is the correct way of using it, and also why no weight decay is specified for particular layers?
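(For what it's worth, a sketch that keeps each parameter in exactly one group; the `no_decay` lists exclude bias and LayerNorm parameters from weight decay, which is the usual convention, and the higher head rate plus `num_train_steps` are illustrative placeholders:)
```python
from pytorch_pretrained_bert.optimization import BertAdam

no_decay = ['bias', 'LayerNorm.weight', 'LayerNorm.bias']
grouped_parameters = [
    # encoder, with and without weight decay
    {'params': [p for n, p in model.bert.named_parameters()
                if not any(nd in n for nd in no_decay)],
     'lr': 2e-5, 'weight_decay': 0.01},
    {'params': [p for n, p in model.bert.named_parameters()
                if any(nd in n for nd in no_decay)],
     'lr': 2e-5, 'weight_decay': 0.0},
    # classification head at a higher rate
    {'params': model.classifier.parameters(), 'lr': 2e-4, 'weight_decay': 0.01},
]
# num_train_steps is a placeholder for the real schedule length.
optimizer = BertAdam(grouped_parameters, lr=2e-5, warmup=0.1, t_total=num_train_steps)
```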
Thanks,
Vishnu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/657/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/656/comments | https://api.github.com/repos/huggingface/transformers/issues/656/events | https://github.com/huggingface/transformers/issues/656 | 451,356,460 | MDU6SXNzdWU0NTEzNTY0NjA= | 656 | Use of GPT for multilingual LM | {
"login": "DEBADRIBASAK",
"id": 32904247,
"node_id": "MDQ6VXNlcjMyOTA0MjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/32904247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DEBADRIBASAK",
"html_url": "https://github.com/DEBADRIBASAK",
"followers_url": "https://api.github.com/users/DEBADRIBASAK/followers",
"following_url": "https://api.github.com/users/DEBADRIBASAK/following{/other_user}",
"gists_url": "https://api.github.com/users/DEBADRIBASAK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DEBADRIBASAK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DEBADRIBASAK/subscriptions",
"organizations_url": "https://api.github.com/users/DEBADRIBASAK/orgs",
"repos_url": "https://api.github.com/users/DEBADRIBASAK/repos",
"events_url": "https://api.github.com/users/DEBADRIBASAK/events{/privacy}",
"received_events_url": "https://api.github.com/users/DEBADRIBASAK/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,559 | 1,565 | 1,565 | NONE | null | Is there any way of using the openai-gpt module for multilingual language modelling? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/656/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/656/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/655/comments | https://api.github.com/repos/huggingface/transformers/issues/655/events | https://github.com/huggingface/transformers/pull/655 | 451,127,843 | MDExOlB1bGxSZXF1ZXN0Mjg0Mjk5OTQx | 655 | Finish torchhub interfaces | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,559 | 1,566 | 1,560 | MEMBER | null | Adding GPT2 and Transformer XL compatibilities for torchhub.
Fix some typos in docs.
@thomwolf could you have a look at the doc changes in `modeling_transfo_xl.py` more specifically?
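(A quick smoke test of the new interface could look like this; the entrypoint names are assumed from this PR's hubconf, so adjust if they differ:)
```python
import torch

# Entrypoint names assumed from this PR's hubconf; adjust to the final version.
tokenizer = torch.hub.load('huggingface/pytorch-pretrained-BERT', 'gpt2Tokenizer', 'gpt2')
model = torch.hub.load('huggingface/pytorch-pretrained-BERT', 'gpt2Model', 'gpt2')
model.eval()

ids = torch.tensor([tokenizer.encode("Who was Jim Henson ?")])
with torch.no_grad():
    hidden_states, presents = model(ids)
print(hidden_states.shape)
```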
Otherwise, I think it should be good. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/655/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/655",
"html_url": "https://github.com/huggingface/transformers/pull/655",
"diff_url": "https://github.com/huggingface/transformers/pull/655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/655.patch",
"merged_at": 1560524529000
} |
https://api.github.com/repos/huggingface/transformers/issues/654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/654/comments | https://api.github.com/repos/huggingface/transformers/issues/654/events | https://github.com/huggingface/transformers/issues/654 | 450,924,907 | MDU6SXNzdWU0NTA5MjQ5MDc= | 654 | use of special tokens in gpt2? | {
"login": "2016csb1062",
"id": 36623119,
"node_id": "MDQ6VXNlcjM2NjIzMTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/36623119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/2016csb1062",
"html_url": "https://github.com/2016csb1062",
"followers_url": "https://api.github.com/users/2016csb1062/followers",
"following_url": "https://api.github.com/users/2016csb1062/following{/other_user}",
"gists_url": "https://api.github.com/users/2016csb1062/gists{/gist_id}",
"starred_url": "https://api.github.com/users/2016csb1062/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2016csb1062/subscriptions",
"organizations_url": "https://api.github.com/users/2016csb1062/orgs",
"repos_url": "https://api.github.com/users/2016csb1062/repos",
"events_url": "https://api.github.com/users/2016csb1062/events{/privacy}",
"received_events_url": "https://api.github.com/users/2016csb1062/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,559 | 1,565 | 1,565 | NONE | null | 
I am actually new to this field. I am trying to use GPT-2 for a sequence classification task in which I am adding "<|endoftext|>" after each sequence and using the last hidden state for classification. My doubt is: what is the use of special tokens such as "<|endoftext|>" if the GPT2Tokenizer does not recognize them?
Thank you, and any suggestions on the classification task are appreciated!
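(One way to check this yourself, as a sketch: `<|endoftext|>` is a single entry in the BPE vocabulary, so it can be looked up directly; the `encoder` dict lookup assumes this library's GPT-2 tokenizer, which exposes its vocabulary that way at the time of writing.)
```python
from pytorch_pretrained_bert import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

# '<|endoftext|>' is one vocabulary entry (id 50256), so look it up
# directly instead of running the string through BPE splitting.
eot_id = tokenizer.encoder['<|endoftext|>']
print(eot_id)
```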
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/654/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/653/comments | https://api.github.com/repos/huggingface/transformers/issues/653/events | https://github.com/huggingface/transformers/issues/653 | 450,697,161 | MDU6SXNzdWU0NTA2OTcxNjE= | 653 | Different Results from version 0.4.0 to version 0.5.0 | {
"login": "ShengleiH",
"id": 23250283,
"node_id": "MDQ6VXNlcjIzMjUwMjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/23250283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShengleiH",
"html_url": "https://github.com/ShengleiH",
"followers_url": "https://api.github.com/users/ShengleiH/followers",
"following_url": "https://api.github.com/users/ShengleiH/following{/other_user}",
"gists_url": "https://api.github.com/users/ShengleiH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShengleiH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShengleiH/subscriptions",
"organizations_url": "https://api.github.com/users/ShengleiH/orgs",
"repos_url": "https://api.github.com/users/ShengleiH/repos",
"events_url": "https://api.github.com/users/ShengleiH/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShengleiH/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, no we didn't change the weights. Can you share a sample on which the results are different?",
"Hi @thomwolf , thanks for your quick reply. I found even version 0.4.0 is different to version 0.2.0 and 0.3.0. I trained the model on v0.4.0, and then I tried to load the model using v0.2.0, here is the mismatch of keys:\r\n\r\n```\r\nRuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:\r\n\r\nMissing key(s) in state_dict: \"bert.embeddings.LayerNorm.gamma\", \"bert.embeddings.LayerNorm.beta\",...\r\n\r\nUnexpected key(s) in state_dict: \"bert.embeddings.LayerNorm.weight\", \"bert.embeddings.LayerNorm.bias\",...\r\n```\r\nThis is also appears when I trained the model on v0.5.0+ and tried to load the model using v0.2.0 and v0.3.0.\r\n\r\nHowever, the first step loss are the same to v0.2.0, v0.3.0 and v0.4.0, loss=0.7228, but the first step loss are different to v0.5.0+ whose loss is 0.7091. And the final converge results are different too. I have set a seed to reproduce the results.",
"Oh, sorry, I found the mismatch problem is from my loading scripts (the 2nd method in [Serialization best-practices](https://github.com/huggingface/pytorch-pretrained-BERT#serialization-best-practices)), because I didn't use the mapping from old_keys to new_keys as you did in 'from_pretrained()' function.",
"Hi @thomwolf , I have found where the differences are. It is the different 'init_bert_weights' that makes the results different.\r\n\r\nIn version 0.2.0, 0.3.0 and 0.4.0, you use 'normal_' to initialize the 'BertLayerNorm':\r\n\r\n```\r\ndef init_bert_weights(self, module):\r\n \"\"\" Initialize the weights.\r\n \"\"\"\r\n if isinstance(module, (nn.Linear, nn.Embedding)):\r\n # Slightly different from the TF version which uses truncated_normal for initialization\r\n # cf https://github.com/pytorch/pytorch/pull/5617\r\n module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)\r\n elif isinstance(module, BertLayerNorm):\r\n module.beta.data.normal_(mean=0.0, std=self.config.initializer_range)\r\n module.gamma.data.normal_(mean=0.0, std=self.config.initializer_range)\r\n if isinstance(module, nn.Linear) and module.bias is not None:\r\n module.bias.data.zero_()\r\n```\r\n\r\nbut in version 0.5.0+, you use 'zeros_' and 'ones_' to initialize 'BertLayerNorm':\r\n\r\n```\r\ndef init_bert_weights(self, module):\r\n \"\"\" Initialize the weights.\r\n \"\"\"\r\n if isinstance(module, (nn.Linear, nn.Embedding)):\r\n # Slightly different from the TF version which uses truncated_normal for initialization\r\n # cf https://github.com/pytorch/pytorch/pull/5617\r\n module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)\r\n elif isinstance(module, BertLayerNorm):\r\n module.bias.data.zero_()\r\n module.weight.data.fill_(1.0)\r\n if isinstance(module, nn.Linear) and module.bias is not None:\r\n module.bias.data.zero_()\r\n```\r\n\r\nBy the way, after correct mapping from old_keys to new_keys, the old version pertained model could be loaded by new version with no different results! Thank you for sharing such great work with us!"
] | 1,559 | 1,561 | 1,559 | NONE | null | Hi, I found that the results after training are different between version 0.4.0 and version 0.5.0. I have fixed all initialization (seeds) to reproduce the results. I also tested versions 0.2.0 and 0.3.0: their results are the same as version 0.4.0, but from version 0.5.0 onwards the results are different. I am wondering whether you have trained a new model, so that the weights changed?
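(The loading mismatch mentioned in the replies comes from the LayerNorm parameter renaming; a sketch of the old-to-new key mapping that `from_pretrained()` applies internally, for anyone loading a checkpoint manually:)
```python
import torch

state_dict = torch.load('pytorch_model.bin', map_location='cpu')
for old_key in list(state_dict.keys()):
    # Pre-0.5.0 checkpoints name LayerNorm parameters gamma/beta.
    new_key = (old_key.replace('LayerNorm.gamma', 'LayerNorm.weight')
                      .replace('LayerNorm.beta', 'LayerNorm.bias'))
    if new_key != old_key:
        state_dict[new_key] = state_dict.pop(old_key)
model.load_state_dict(state_dict)
```
 | {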
"url": "https://api.github.com/repos/huggingface/transformers/issues/653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/653/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/652/comments | https://api.github.com/repos/huggingface/transformers/issues/652/events | https://github.com/huggingface/transformers/issues/652 | 450,626,940 | MDU6SXNzdWU0NTA2MjY5NDA= | 652 | RuntimeError: CUDA error: device-side assert triggered | {
"login": "pengshuang",
"id": 11802795,
"node_id": "MDQ6VXNlcjExODAyNzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/11802795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pengshuang",
"html_url": "https://github.com/pengshuang",
"followers_url": "https://api.github.com/users/pengshuang/followers",
"following_url": "https://api.github.com/users/pengshuang/following{/other_user}",
"gists_url": "https://api.github.com/users/pengshuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pengshuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pengshuang/subscriptions",
"organizations_url": "https://api.github.com/users/pengshuang/orgs",
"repos_url": "https://api.github.com/users/pengshuang/repos",
"events_url": "https://api.github.com/users/pengshuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/pengshuang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Rerun with environmental variable `CUDA_LAUNCH_BLOCKING=1` and see what line it crashed on.\r\n\r\nThis is almost always an out-of-bounds error on some embeddings lookup. Usually positional embeddings, but it could be word embeddings or segment embeddings.",
"HI @stephenroller , I do set environmental variable `CUDA_LAUNCH_BLOCKING=1` and get the previous log. I will check my word embeddings or segment embeddings.",
"Then it’s definitely that you’ve got a bad index into the positional embeddings.",
"But when I removed the positional embeddings, it still posts the error.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> But when I removed the positional embeddings, it still posts the error.\r\n\r\nI met the same problem. Did you find how to solve it?",
"I met the same problem...",
"> I met the same problem...\r\n\r\nI solved it myself. I met the problem when I distill the big model,but the vocab size of teacher model and student model is different. I modify the vocab size and it works.",
"> > I met the same problem...\r\n> \r\n> I solved it myself. I met the problem when I distill the big model,but the vocab size of teacher model and student model is different. I modify the vocab size and it works.\r\n\r\nI'm experiencing the same problem. Can you please elaborate on what to do ?",
"> But when I removed the positional embeddings, it still posts the error.\r\n\r\nHi! Could you please elaborate on how did you solve this error? I removed the positional embeddings and then this error is showing. Sorry about dragging a problem from years ago!",
"Hi! Could you please elaborate on how did you solve this error? Thanks"
] | 1,559 | 1,706 | 1,564 | NONE | null | I got this error when using simple_lm_finetuning.py to continue training a BERT model. Could anyone help? Thanks a lot.
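(Per the replies, this assert is almost always an out-of-bounds index into an embedding table; a generic sanity check, sketched against this script's own variable names, is:)
```python
# Run on CPU before moving the batch to the GPU; every id must be a valid
# row of its embedding matrix, or CUDA raises this device-side assert.
vocab_size = model.bert.embeddings.word_embeddings.num_embeddings
max_positions = model.bert.embeddings.position_embeddings.num_embeddings

assert input_ids.max().item() < vocab_size
assert input_ids.size(1) <= max_positions
assert segment_ids.max().item() <= 1
```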
Here are the CUDA and Python traces. I confirm that my input max_length does not exceed **max_position_embeddings**.
```
/pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [329,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [329,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
```
```
Loading Train Dataset input_lm.txt
Traceback (most recent call last):
File "simple_lm_finetuning.py", line 646, in <module>
main()
File "simple_lm_finetuning.py", line 592, in main
loss = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next)
File "/home/jianfeng.ps/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/jianfeng.ps/bert-mrc/pytorch_pretrained_bert/modeling.py", line 783, in forward
File "/home/jianfeng.ps/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/jianfeng.ps/bert-mrc/pytorch_pretrained_bert/modeling.py", line 714, in forward
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
File "/home/jianfeng.ps/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/jianfeng.ps/bert-mrc/pytorch_pretrained_bert/modeling.py", line 261, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/home/jianfeng.ps/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/jianfeng.ps/anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 118, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/jianfeng.ps/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1454, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: CUDA error: device-side assert triggered
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/652/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/652/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/651/comments | https://api.github.com/repos/huggingface/transformers/issues/651/events | https://github.com/huggingface/transformers/pull/651 | 450,617,986 | MDExOlB1bGxSZXF1ZXN0MjgzODk5NzM2 | 651 | Add GPT* compatibility to torchhub | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Amazing, thanks a lot @VictorSanh!"
] | 1,559 | 1,566 | 1,559 | MEMBER | null | I'll add GPT2 for torchhub later. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/651/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/651",
"html_url": "https://github.com/huggingface/transformers/pull/651",
"diff_url": "https://github.com/huggingface/transformers/pull/651.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/651.patch",
"merged_at": 1559306693000
} |
https://api.github.com/repos/huggingface/transformers/issues/650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/650/comments | https://api.github.com/repos/huggingface/transformers/issues/650/events | https://github.com/huggingface/transformers/pull/650 | 450,475,886 | MDExOlB1bGxSZXF1ZXN0MjgzNzg3OTA4 | 650 | default in __init__s for classification BERT models | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,559 | 1,576 | 1,559 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/650/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/650",
"html_url": "https://github.com/huggingface/transformers/pull/650",
"diff_url": "https://github.com/huggingface/transformers/pull/650.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/650.patch",
"merged_at": 1559245994000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/649/comments | https://api.github.com/repos/huggingface/transformers/issues/649/events | https://github.com/huggingface/transformers/issues/649 | 450,298,705 | MDU6SXNzdWU0NTAyOTg3MDU= | 649 | fine-tuning BERT, next sentence prediction loss is not decreasing | {
"login": "lazyfuzzypringle",
"id": 10270306,
"node_id": "MDQ6VXNlcjEwMjcwMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/10270306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lazyfuzzypringle",
"html_url": "https://github.com/lazyfuzzypringle",
"followers_url": "https://api.github.com/users/lazyfuzzypringle/followers",
"following_url": "https://api.github.com/users/lazyfuzzypringle/following{/other_user}",
"gists_url": "https://api.github.com/users/lazyfuzzypringle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lazyfuzzypringle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lazyfuzzypringle/subscriptions",
"organizations_url": "https://api.github.com/users/lazyfuzzypringle/orgs",
"repos_url": "https://api.github.com/users/lazyfuzzypringle/repos",
"events_url": "https://api.github.com/users/lazyfuzzypringle/events{/privacy}",
"received_events_url": "https://api.github.com/users/lazyfuzzypringle/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"same not working for me. NSP loss is not converging even though MLM loss is converging."
] | 1,559 | 1,595 | 1,565 | NONE | null | I ran simple_lm_finetuning and monitored the change of next_sentence_loss and masked_lm_loss. The masked-token loss converges, but the next_sentence_loss is not decreasing.
So far I have tried tuning the learning rate and switching to optimizers from torch.optim; I checked the input_ids and the input looks good... I also ignored the loss from masked tokens and specifically trained next-sentence prediction, but it didn't work.
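(For reference, one way to isolate the next-sentence objective in this script can be sketched like this, assuming the loss ignores label id -1 for masked-LM positions, as this library's `BertForPreTraining` does:)
```python
import torch

# Masked-LM labels set to -1 are ignored by the cross-entropy,
# so only the next-sentence prediction loss remains.
lm_label_ids = torch.full_like(input_ids, -1)
loss = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next)
```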
Did anyone face the same problem, or where should I check to make it work? Thanks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/649/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/649/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/648/comments | https://api.github.com/repos/huggingface/transformers/issues/648/events | https://github.com/huggingface/transformers/issues/648 | 450,295,950 | MDU6SXNzdWU0NTAyOTU5NTA= | 648 | [Dropout] why there is no dropout for the dev and eval? | {
"login": "Albert-Ma",
"id": 7343136,
"node_id": "MDQ6VXNlcjczNDMxMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7343136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Albert-Ma",
"html_url": "https://github.com/Albert-Ma",
"followers_url": "https://api.github.com/users/Albert-Ma/followers",
"following_url": "https://api.github.com/users/Albert-Ma/following{/other_user}",
"gists_url": "https://api.github.com/users/Albert-Ma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Albert-Ma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Albert-Ma/subscriptions",
"organizations_url": "https://api.github.com/users/Albert-Ma/orgs",
"repos_url": "https://api.github.com/users/Albert-Ma/repos",
"events_url": "https://api.github.com/users/Albert-Ma/events{/privacy}",
"received_events_url": "https://api.github.com/users/Albert-Ma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@thomwolf Thanks!",
"line 604",
"> line 604\r\n\r\nThanks so much. my mistake.\r\n\r\nDo you know why there is no dropout for the dev and eval?\r\n",
"first of all, no one uses dropout at evaluation stage as it's a regularizer. The difference of implementation is due to the fact that a dropout layer in pytorch behaves differently once you turn on a evaluation model (i.e., model.eval()). So even though it has a dropout layer it doesn't take any effect (p=0) at evaluation.",
"> first of all, no one uses dropout at evaluation stage as it's a regularizer. The difference of implementation is due to the fact that a dropout layer in pytorch behaves differently once you turn on a evaluation model (i.e., model.eval()). So even though it has a dropout layer it doesn't take any effect (p=0) at evaluation.\r\n\r\nThanks for help me clarify this problem."
] | 1,559 | 1,562 | 1,562 | NONE | null | I do not see any dropout layer after `get_pooled_output()` in the tf version referred to [here](https://github.com/google-research/bert/blob/master/run_classifier.py#L590). Why do you add a dropout layer in your implementation?
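(As the replies explain, the extra layer is harmless at evaluation time because PyTorch dropout becomes a no-op in eval mode; a minimal sketch:)
```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.1)
x = torch.ones(8)

drop.train()     # training mode: elements zeroed at random, rest scaled by 1/(1-p)
print(drop(x))

drop.eval()      # what model.eval() sets recursively on every submodule
print(drop(x))   # identity: dropout has no effect at evaluation
```
 | {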
"url": "https://api.github.com/repos/huggingface/transformers/issues/648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/648/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/647/comments | https://api.github.com/repos/huggingface/transformers/issues/647/events | https://github.com/huggingface/transformers/issues/647 | 450,256,737 | MDU6SXNzdWU0NTAyNTY3Mzc= | 647 | No softmax activation in BertForTokenClassification | {
"login": "shudima",
"id": 7312293,
"node_id": "MDQ6VXNlcjczMTIyOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7312293?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shudima",
"html_url": "https://github.com/shudima",
"followers_url": "https://api.github.com/users/shudima/followers",
"following_url": "https://api.github.com/users/shudima/following{/other_user}",
"gists_url": "https://api.github.com/users/shudima/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shudima/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shudima/subscriptions",
"organizations_url": "https://api.github.com/users/shudima/orgs",
"repos_url": "https://api.github.com/users/shudima/repos",
"events_url": "https://api.github.com/users/shudima/events{/privacy}",
"received_events_url": "https://api.github.com/users/shudima/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's because `nn.CrossEntropyLoss` already has a Softmax integrated in the module:\r\nhttps://pytorch.org/docs/stable/nn.html?highlight=crossentropy#torch.nn.CrossEntropyLoss",
"I see it now. Thanks!"
] | 1,559 | 1,559 | 1,559 | NONE | null | The BertForTokenClassification class has the `classifier` member, which is a linear layer.
In the `forward` function, it treats its output as probabilities (cross-entropy for the loss), but there is no softmax. Is there a reason for that?
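(Answered in the replies: `nn.CrossEntropyLoss` applies log-softmax internally, so the linear layer's raw logits are exactly what it expects. A minimal check with made-up shapes:)
```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 9)              # (batch, num_labels) raw classifier scores
labels = torch.randint(0, 9, (4,))

loss_ce = F.cross_entropy(logits, labels)
loss_manual = F.nll_loss(F.log_softmax(logits, dim=-1), labels)
assert torch.allclose(loss_ce, loss_manual)  # identical: softmax is built in
```
 | {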
"url": "https://api.github.com/repos/huggingface/transformers/issues/647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/647/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/646/comments | https://api.github.com/repos/huggingface/transformers/issues/646/events | https://github.com/huggingface/transformers/pull/646 | 450,131,288 | MDExOlB1bGxSZXF1ZXN0MjgzNTE2Njg5 | 646 | Fix link in README | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,559 | 1,560 | 1,560 | CONTRIBUTOR | null | Link was not working. Fixed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/646/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/646",
"html_url": "https://github.com/huggingface/transformers/pull/646",
"diff_url": "https://github.com/huggingface/transformers/pull/646.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/646.patch",
"merged_at": 1560524266000
} |
https://api.github.com/repos/huggingface/transformers/issues/645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/645/comments | https://api.github.com/repos/huggingface/transformers/issues/645/events | https://github.com/huggingface/transformers/issues/645 | 449,670,054 | MDU6SXNzdWU0NDk2NzAwNTQ= | 645 | BertAdam's get_lr() not return correct learning rate | {
"login": "light8lee",
"id": 13532435,
"node_id": "MDQ6VXNlcjEzNTMyNDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/13532435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/light8lee",
"html_url": "https://github.com/light8lee",
"followers_url": "https://api.github.com/users/light8lee/followers",
"following_url": "https://api.github.com/users/light8lee/following{/other_user}",
"gists_url": "https://api.github.com/users/light8lee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/light8lee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/light8lee/subscriptions",
"organizations_url": "https://api.github.com/users/light8lee/orgs",
"repos_url": "https://api.github.com/users/light8lee/repos",
"events_url": "https://api.github.com/users/light8lee/events{/privacy}",
"received_events_url": "https://api.github.com/users/light8lee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,559 | 1,564 | 1,564 | NONE | null | When the initial `params` passed to `BertAdam` include a Parameter with `requires_grad=False`, the loop in the `step()` function just continues past it at line 251. After `step()`, when I want to use `get_lr()` to get the current learning rate, the state of this Parameter is an empty dict, so the function just returns `[0]` at line 230. I think this is unexpected; maybe more checking of the grad is needed somewhere.
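(A caller-side workaround, sketched with placeholder hyper-parameters: hand only trainable parameters to the optimizer, so no parameter is left with an empty state:)
```python
from pytorch_pretrained_bert.optimization import BertAdam

# Frozen (requires_grad=False) parameters never get optimizer state,
# so exclude them up front and get_lr() only sees stepped parameters.
trainable = [p for p in model.parameters() if p.requires_grad]
# lr, warmup, and num_train_steps are placeholders.
optimizer = BertAdam(trainable, lr=5e-5, warmup=0.1, t_total=num_train_steps)
```
 | {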
"url": "https://api.github.com/repos/huggingface/transformers/issues/645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/645/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/644/comments | https://api.github.com/repos/huggingface/transformers/issues/644/events | https://github.com/huggingface/transformers/issues/644 | 449,572,712 | MDU6SXNzdWU0NDk1NzI3MTI= | 644 | RuntimeError: cublas runtime error : an internal operation failed at /pytorch/aten/src/THC/THCBlas.cu:258 | {
"login": "chenbingxiayu",
"id": 23647595,
"node_id": "MDQ6VXNlcjIzNjQ3NTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/23647595?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenbingxiayu",
"html_url": "https://github.com/chenbingxiayu",
"followers_url": "https://api.github.com/users/chenbingxiayu/followers",
"following_url": "https://api.github.com/users/chenbingxiayu/following{/other_user}",
"gists_url": "https://api.github.com/users/chenbingxiayu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenbingxiayu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenbingxiayu/subscriptions",
"organizations_url": "https://api.github.com/users/chenbingxiayu/orgs",
"repos_url": "https://api.github.com/users/chenbingxiayu/repos",
"events_url": "https://api.github.com/users/chenbingxiayu/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenbingxiayu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Python version is 3.6, the cuda version is 10.1.105, cudnn version is 7.51, and pytorch version is 1.0.1, platform is ubuntu 14.04. And GPU is Titan GTX 1080Ti with 11g memory.\r\nThanks all of you!",
"I am running into almost exactly the same issue. python3.6 cuda 10.0 ubuntu 18.04 and a 1080ti as well. Not sure what it is coming from. I see lots of google results from 2080ti's but those looked like architecture issues. ",
"I tried running with CUDA_LAUNCH_BLOCKING=1 because others have stated this will get us a more accurate error. This led to RuntimeError: Creating MTGP constants failed. at /pytorch/aten/src/THC/THCTensorRandom.cu:33\r\n. Doing some searching as to why this might be. Some people said indexing error somewhere. seems to be happening in the dropout area though so I dont see how that is possible. ",
"https://github.com/pytorch/pytorch/issues/20489\r\nhttps://github.com/pytorch/pytorch/pull/20886\r\nThese look related",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Python version is 3.6, the cuda version is 10.0 , cudnn version is 7.5, and pytorch version is 1.1.0, platform is ubuntu 16.04. And GPU is Titan GTX 2080Ti.\r\nSolve this problem."
] | 1,559 | 1,573 | 1,565 | NONE | null | I implemented my model following the implementations in the examples; after my model runs for several batches, the error shown in the title occurs.
The full traceback is as follows:
```
Traceback (most recent call last):
File "train.py", line 549, in <module>
train()
File "/media/***/***/***/***/***.py", line 120, in forward
topic_representation = self.bertRepresentation(topic_ids,topic_type_ids,topic_mask)
File "/media/***/***/***/***/***.py", line 113, in bertRepresentation
_, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/***/Downloads/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py", line 736, in forward
output_all_encoded_layers=output_all_encoded_layers)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/***/Downloads/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py", line 409, in forward
hidden_states = layer_module(hidden_states, attention_mask)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/***/Downloads/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py", line 394, in forward
attention_output = self.attention(hidden_states, attention_mask)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/***/Downloads/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py", line 352, in forward
self_output = self.self(input_tensor, attention_mask)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/***/Downloads/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py", line 303, in forward
mixed_query_layer = self.query(hidden_states)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 67, in forward
return F.linear(input, self.weight, self.bias)
File "/home/***/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1354, in linear
output = input.matmul(weight.t())
RuntimeError: cublas runtime error : an internal operation failed at /pytorch/aten/src/THC/THCBlas.cu:258
```
In the trace, `self.bert` is the call into the BERT model. I have googled this problem and found no method that solves it, so I am looking forward to any suggestions.
I also ran my model on the CPU; although it is very slow, there are no errors. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/644/timeline | completed | null | null |
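For errors like the one in the issue above, the first step suggested in the comments is to rerun with `CUDA_LAUNCH_BLOCKING=1` so the failing kernel is reported at its real call site. A small diagnostic sketch (not a fix) that checks whether a plain cuBLAS matmul of the failing shape works on the same device:

```python
# Run as: CUDA_LAUNCH_BLOCKING=1 python cublas_check.py
import torch

device = torch.device("cuda:0")
hidden = torch.randn(4, 512, 768, device=device)   # batch, seq_len, hidden
weight = torch.randn(768, 768, device=device)      # e.g. a query projection
out = hidden.matmul(weight.t())                    # same op as F.linear
print(out.shape)  # expected: torch.Size([4, 512, 768])
```

If this standalone matmul also fails, the problem is in the CUDA/cuBLAS/driver stack rather than in the model code.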
https://api.github.com/repos/huggingface/transformers/issues/643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/643/comments | https://api.github.com/repos/huggingface/transformers/issues/643/events | https://github.com/huggingface/transformers/issues/643 | 449,401,810 | MDU6SXNzdWU0NDk0MDE4MTA= | 643 | FileNotFoundError: [Errno 2] No such file or directory: 'uncased_L-12_H-768_A-12\\pytorch_model.bin' | {
"login": "DEBADRIBASAK",
"id": 32904247,
"node_id": "MDQ6VXNlcjMyOTA0MjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/32904247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DEBADRIBASAK",
"html_url": "https://github.com/DEBADRIBASAK",
"followers_url": "https://api.github.com/users/DEBADRIBASAK/followers",
"following_url": "https://api.github.com/users/DEBADRIBASAK/following{/other_user}",
"gists_url": "https://api.github.com/users/DEBADRIBASAK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DEBADRIBASAK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DEBADRIBASAK/subscriptions",
"organizations_url": "https://api.github.com/users/DEBADRIBASAK/orgs",
"repos_url": "https://api.github.com/users/DEBADRIBASAK/repos",
"events_url": "https://api.github.com/users/DEBADRIBASAK/events{/privacy}",
"received_events_url": "https://api.github.com/users/DEBADRIBASAK/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I got the same problem. Did you solve the problem?",
"> I got the same problem. Did you solve the problem?\r\n\r\nYes. Actually the project folder of this implementation does not contain the `pytorch_model.bin` file. For loading the actual pretrained model, you have to use `BertModel.from_pretrained('bert-base-uncased').` Here as the parameter, you can mention name of any one of the bert variations available. After fine-tuning, you can save the `state_dict()` of the model in the `pytorch_model.bin` file and use it later."
] | 1,559 | 1,560 | 1,560 | NONE | null | I was just trying to get familiar with the PyTorch implementation of BERT. I tried the examples mentioned in the README file. The statement **tokenizer = BertTokenizer.from_pretrained(BERT_PRETRAINED_PATH,do_lower_case=True)** works perfectly, but when I try the same with **model = BertForMaskedLM.from_pretrained(BERT_PRETRAINED_PATH)**, it shows the error **FileNotFoundError: [Errno 2] No such file or directory: 'uncased_L-12_H-768_A-12\\pytorch_model.bin'**. Can someone point out where I am going wrong? The pretrained BERT weights are available in the same directory, and **BERT_PRETRAINED_PATH** is **Path("uncased_L-12_H-768_A-12")** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/643/timeline | completed | null | null |
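A minimal sketch of the save/reload cycle described in the answer above; the file path is a placeholder and the fine-tuning loop is elided.

```python
import torch
from pytorch_pretrained_bert import BertForMaskedLM

# Download the published pretrained weights by name (no local .bin needed).
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# ... fine-tune the model here ...

# Save the fine-tuned weights for later use.
torch.save(model.state_dict(), "pytorch_model.bin")

# Later: rebuild the architecture and load the fine-tuned state_dict.
state_dict = torch.load("pytorch_model.bin")
model = BertForMaskedLM.from_pretrained("bert-base-uncased", state_dict=state_dict)
```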
https://api.github.com/repos/huggingface/transformers/issues/642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/642/comments | https://api.github.com/repos/huggingface/transformers/issues/642/events | https://github.com/huggingface/transformers/issues/642 | 449,349,871 | MDU6SXNzdWU0NDkzNDk4NzE= | 642 | Performing optimization on CPU | {
"login": "ikuyamada",
"id": 426342,
"node_id": "MDQ6VXNlcjQyNjM0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/426342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ikuyamada",
"html_url": "https://github.com/ikuyamada",
"followers_url": "https://api.github.com/users/ikuyamada/followers",
"following_url": "https://api.github.com/users/ikuyamada/following{/other_user}",
"gists_url": "https://api.github.com/users/ikuyamada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ikuyamada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ikuyamada/subscriptions",
"organizations_url": "https://api.github.com/users/ikuyamada/orgs",
"repos_url": "https://api.github.com/users/ikuyamada/repos",
"events_url": "https://api.github.com/users/ikuyamada/events{/privacy}",
"received_events_url": "https://api.github.com/users/ikuyamada/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Oh yes you are right, we removed that, I'll update the readme",
"Thank you for your reply!\r\nBecause Adam's averages (i.e., `next_m` and `next_v`) are large, it is very helpful to support performing optimization step on CPU to reduce GPU memory.\r\nTherefore, I would like to know how you implemented this.\r\nDid you move the parameters to CPU (e.g., `grad.to(torch.device('cpu'))`) in `BertAdam.step` function?",
"@ikuyamada In the old repository, the model parameters on gpu obtained by forward and backward are copied to the optimizer that stores the model parameters on cpu, and the updated model parameters are returned to the model on gpu at each training step.\r\nCheck https://github.com/huggingface/pytorch-pretrained-BERT/blob/v0.3.0/examples/run_squad.py#L682\r\n\r\n@thomwolf I have questions. Why did you decide to delete `optimize_on_cpu`? Does cpu and gpu parameter transfer affect training speed?\r\n",
"@ryonakamura thank you for the pointer!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,559 | 1,568 | 1,568 | CONTRIBUTOR | null | In README.md, it is explained that the optimization step was performed on CPU to train a SQuAD model.
> perform the optimization step on CPU to store Adam's averages in RAM.
https://github.com/huggingface/pytorch-pretrained-BERT#fine-tuning-bert-large-on-gpus
Is it still supported in `run_squad.py`?
If I understand correctly, the current implementation always keeps Adam's averages on the GPU when the model is trained on GPUs. (A sketch of the old pattern follows this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/642/timeline | completed | null | null |
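A rough sketch of the optimize-on-CPU pattern described in the comments above (not the removed implementation itself): gradients computed on the GPU are copied to a CPU copy of the parameters whose optimizer keeps Adam's averages in RAM, and the updated weights are copied back each step.

```python
import torch

def step_on_cpu(gpu_params, cpu_params, cpu_optimizer):
    # Copy gradients from the GPU model to the CPU parameter copies.
    for gpu_p, cpu_p in zip(gpu_params, cpu_params):
        if gpu_p.grad is not None:
            cpu_p.grad = gpu_p.grad.detach().to("cpu")
    # Adam's exp_avg / exp_avg_sq buffers now live in RAM, not GPU memory.
    cpu_optimizer.step()
    cpu_optimizer.zero_grad()
    # Copy the updated weights back to the GPU model.
    with torch.no_grad():
        for gpu_p, cpu_p in zip(gpu_params, cpu_params):
            gpu_p.copy_(cpu_p.to(gpu_p.device))
```

The trade-off, as the comments note, is extra host/device transfer time per step in exchange for the optimizer state not occupying GPU memory.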
https://api.github.com/repos/huggingface/transformers/issues/641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/641/comments | https://api.github.com/repos/huggingface/transformers/issues/641/events | https://github.com/huggingface/transformers/issues/641 | 449,349,557 | MDU6SXNzdWU0NDkzNDk1NTc= | 641 | The prediction accuracy for the masked token is ZERO when using the pretrained model. Does it make sense? | {
"login": "lazyfuzzypringle",
"id": 10270306,
"node_id": "MDQ6VXNlcjEwMjcwMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/10270306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lazyfuzzypringle",
"html_url": "https://github.com/lazyfuzzypringle",
"followers_url": "https://api.github.com/users/lazyfuzzypringle/followers",
"following_url": "https://api.github.com/users/lazyfuzzypringle/following{/other_user}",
"gists_url": "https://api.github.com/users/lazyfuzzypringle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lazyfuzzypringle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lazyfuzzypringle/subscriptions",
"organizations_url": "https://api.github.com/users/lazyfuzzypringle/orgs",
"repos_url": "https://api.github.com/users/lazyfuzzypringle/repos",
"events_url": "https://api.github.com/users/lazyfuzzypringle/events{/privacy}",
"received_events_url": "https://api.github.com/users/lazyfuzzypringle/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,559 | 1,559 | 1,559 | NONE | null | Hi, this question might not be relevant to the code but rather to BERT itself; I still hope I can find someone to help me out.
I ran the pretrained BertForPreTraining model and tested it on my own text data. Because BERT has knowledge about language, I expected it to be able to predict the masked tokens with reasonable accuracy; however, the accuracy is zero. I ran the model in model.eval() mode. Does it make sense that it predicts the masked tokens with zero accuracy? (A sanity-check sketch follows this record.)
Thanks for the help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/641/timeline | completed | null | null |
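One way to sanity-check the setup described in the issue above: mask a single token and inspect the top prediction from `BertForMaskedLM` in eval mode. If this works but the full pipeline scores zero, the masking or label alignment in that pipeline is the likely culprit. A sketch, assuming `bert-base-uncased`:

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

tokens = ["[CLS]", "the", "man", "went", "to", "the", "[MASK]", ".", "[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    logits = model(input_ids)               # (1, seq_len, vocab_size)

predicted_id = logits[0, 6].argmax().item()  # index 6 is the [MASK] position
print(tokenizer.convert_ids_to_tokens([predicted_id]))
```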
https://api.github.com/repos/huggingface/transformers/issues/640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/640/comments | https://api.github.com/repos/huggingface/transformers/issues/640/events | https://github.com/huggingface/transformers/pull/640 | 448,757,307 | MDExOlB1bGxSZXF1ZXN0MjgyNDM3NjM1 | 640 | Support latest multi language bert fine tune | {
"login": "Barqawiz",
"id": 2751950,
"node_id": "MDQ6VXNlcjI3NTE5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2751950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Barqawiz",
"html_url": "https://github.com/Barqawiz",
"followers_url": "https://api.github.com/users/Barqawiz/followers",
"following_url": "https://api.github.com/users/Barqawiz/following{/other_user}",
"gists_url": "https://api.github.com/users/Barqawiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Barqawiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Barqawiz/subscriptions",
"organizations_url": "https://api.github.com/users/Barqawiz/orgs",
"repos_url": "https://api.github.com/users/Barqawiz/repos",
"events_url": "https://api.github.com/users/Barqawiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Barqawiz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thanks!"
] | 1,558 | 1,560 | 1,560 | CONTRIBUTOR | null | **Affected function**: fine tune example file
**Update summary**:
- Fix the issue of `bert-base-multilingual` not being found by correcting the uncased version's name in the argument dict
- Add support for the cased version by adding its correct name to the argument dict | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/640/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/640",
"html_url": "https://github.com/huggingface/transformers/pull/640",
"diff_url": "https://github.com/huggingface/transformers/pull/640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/640.patch",
"merged_at": 1560524223000
} |
https://api.github.com/repos/huggingface/transformers/issues/639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/639/comments | https://api.github.com/repos/huggingface/transformers/issues/639/events | https://github.com/huggingface/transformers/issues/639 | 448,236,891 | MDU6SXNzdWU0NDgyMzY4OTE= | 639 | Isn't it too few activations? | {
"login": "mzmey37",
"id": 9392265,
"node_id": "MDQ6VXNlcjkzOTIyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9392265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mzmey37",
"html_url": "https://github.com/mzmey37",
"followers_url": "https://api.github.com/users/mzmey37/followers",
"following_url": "https://api.github.com/users/mzmey37/following{/other_user}",
"gists_url": "https://api.github.com/users/mzmey37/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mzmey37/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mzmey37/subscriptions",
"organizations_url": "https://api.github.com/users/mzmey37/orgs",
"repos_url": "https://api.github.com/users/mzmey37/repos",
"events_url": "https://api.github.com/users/mzmey37/events{/privacy}",
"received_events_url": "https://api.github.com/users/mzmey37/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,564 | 1,564 | NONE | null | Hello, I have a question. In the BertLayer part of the model, the BertAttention module applies attention (a nonlinear operation) followed by selfOutput (a linear transformation: dense plus LayerNorm). Then BertIntermediate starts with another linear transformation, which means we have dense, LayerNorm, and dense transformations one after another. Why isn't there a nonlinear activation between BertAttention and BertIntermediate in BertLayer? (See the sketch after this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/639/timeline | completed | null | null |
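For the question above: the nonlinearity is there, it just lives inside `BertIntermediate`, which applies GELU right after its dense projection (and `BertSelfOutput` uses dense + LayerNorm + residual, not batch norm). A condensed sketch of the flow, with the submodules abstracted as callables:

```python
import math
import torch

def gelu(x):
    # The GELU activation used in the original modeling.py.
    return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def bert_layer(hidden, self_attention, self_output, inter_dense, out_dense):
    attn = self_output(self_attention(hidden))  # dense + LayerNorm + residual
    inter = gelu(inter_dense(attn))             # the nonlinearity in question
    return out_dense(inter)                     # dense + LayerNorm + residual
```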
https://api.github.com/repos/huggingface/transformers/issues/638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/638/comments | https://api.github.com/repos/huggingface/transformers/issues/638/events | https://github.com/huggingface/transformers/issues/638 | 448,020,154 | MDU6SXNzdWU0NDgwMjAxNTQ= | 638 | GPT-2 Tokenizer error! | {
"login": "PhungVanDuy",
"id": 28798474,
"node_id": "MDQ6VXNlcjI4Nzk4NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/28798474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhungVanDuy",
"html_url": "https://github.com/PhungVanDuy",
"followers_url": "https://api.github.com/users/PhungVanDuy/followers",
"following_url": "https://api.github.com/users/PhungVanDuy/following{/other_user}",
"gists_url": "https://api.github.com/users/PhungVanDuy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhungVanDuy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhungVanDuy/subscriptions",
"organizations_url": "https://api.github.com/users/PhungVanDuy/orgs",
"repos_url": "https://api.github.com/users/PhungVanDuy/repos",
"events_url": "https://api.github.com/users/PhungVanDuy/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhungVanDuy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe you can find the solution at #537 ",
"> Maybe you can find the solution at #537\r\n\r\nThank for your support! It works for me!"
] | 1,558 | 1,559 | 1,559 | CONTRIBUTOR | null | I tried to use GPT-2 to encode `text = "This story gets more ridiculous by the hour! And, I love that people are sending these guys dildos in the mail now. But… if they really think there's a happy ending in this for any of them, I think they're even more deluded than all of the jokes about them assume."`, but it raises an error (a working sketch follows this record).

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/638/timeline | completed | null | null |
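A minimal check of the fix referenced above (issue 537): with an up-to-date install, the GPT-2 byte-pair tokenizer handles non-ASCII characters such as the ellipsis directly. A sketch:

```python
from pytorch_pretrained_bert import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
text = "But… if they really think there's a happy ending"
ids = tokenizer.encode(text)    # byte-level BPE, so "…" is representable
print(ids)
print(tokenizer.decode(ids))    # round-trips back to the original text
```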
https://api.github.com/repos/huggingface/transformers/issues/637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/637/comments | https://api.github.com/repos/huggingface/transformers/issues/637/events | https://github.com/huggingface/transformers/issues/637 | 447,986,927 | MDU6SXNzdWU0NDc5ODY5Mjc= | 637 | run_squad.py F1 and EM score are differ from tensorflow version | {
"login": "rkcalnode",
"id": 50977760,
"node_id": "MDQ6VXNlcjUwOTc3NzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/50977760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rkcalnode",
"html_url": "https://github.com/rkcalnode",
"followers_url": "https://api.github.com/users/rkcalnode/followers",
"following_url": "https://api.github.com/users/rkcalnode/following{/other_user}",
"gists_url": "https://api.github.com/users/rkcalnode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rkcalnode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rkcalnode/subscriptions",
"organizations_url": "https://api.github.com/users/rkcalnode/orgs",
"repos_url": "https://api.github.com/users/rkcalnode/repos",
"events_url": "https://api.github.com/users/rkcalnode/events{/privacy}",
"received_events_url": "https://api.github.com/users/rkcalnode/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I also can't get the same result on my squad dataset",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,564 | 1,564 | NONE | null | I use squad1.1 data from https://www.kaggle.com/stanfordu/stanford-question-answering-dataset
I ran convert_tf_checkpoint_to_pytorch.py (using Google's uncased_L-12_H-768_A-12 model) and then run_squad.py. The result was:
```
{"exact_match": 73.30179754020814, "f1": 82.10116863001393}
```
Also, I ran run_squad_hvd.py from https://github.com/lambdal/bert (it wraps the original TensorFlow BERT with Horovod), and the result was:
```
{"exact_match": 80.66225165562913, "f1": 88.09365604437863}
```
bert_config.json is the same.
My parameters are:
PyTorch version:
```
--train_batch_size", default=32,
--learning_rate", default=5e-5.
--max_seq_length", default=384,
--doc_stride", default=128,
--max_query_length", default=64
--predict_batch_size",default=8
```
TensorFlow version:
```
--train_batch_size=8,
--learning_rate=5e-5,
--num_train_epochs=3.0,
--max_seq_length=384,
--doc_stride=128,
--max_query_length=64,
--predict_batch_size=8
```
I wonder
1. If the source pretrained model and the parameters are the same, should the PyTorch and TensorFlow versions produce very close scores, perhaps around F1 88 / EM 80? Is that right?
2. If 1. is true, then my parameters or procedure must be wrong. Where should I look? (One visible difference above: train_batch_size is 32 in the PyTorch run but 8 in the TensorFlow run.)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/637/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/636/comments | https://api.github.com/repos/huggingface/transformers/issues/636/events | https://github.com/huggingface/transformers/issues/636 | 447,855,279 | MDU6SXNzdWU0NDc4NTUyNzk= | 636 | Training Dataset of GPT-2 | {
"login": "ngarneau",
"id": 665101,
"node_id": "MDQ6VXNlcjY2NTEwMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/665101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ngarneau",
"html_url": "https://github.com/ngarneau",
"followers_url": "https://api.github.com/users/ngarneau/followers",
"following_url": "https://api.github.com/users/ngarneau/following{/other_user}",
"gists_url": "https://api.github.com/users/ngarneau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ngarneau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngarneau/subscriptions",
"organizations_url": "https://api.github.com/users/ngarneau/orgs",
"repos_url": "https://api.github.com/users/ngarneau/repos",
"events_url": "https://api.github.com/users/ngarneau/events{/privacy}",
"received_events_url": "https://api.github.com/users/ngarneau/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The pretrained model is trained on the WebText dataset. It's a collection of documents from outgoing Reddit links that have above 3 karma (ensures quality).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,566 | 1,566 | CONTRIBUTOR | null | What is the training dataset for the pre-trained GPT-2 Model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/636/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/635/comments | https://api.github.com/repos/huggingface/transformers/issues/635/events | https://github.com/huggingface/transformers/issues/635 | 447,540,290 | MDU6SXNzdWU0NDc1NDAyOTA= | 635 | GPT2 - support data type torch.DoubleTensor for Position embedding | {
"login": "adigoryl",
"id": 31667817,
"node_id": "MDQ6VXNlcjMxNjY3ODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/31667817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adigoryl",
"html_url": "https://github.com/adigoryl",
"followers_url": "https://api.github.com/users/adigoryl/followers",
"following_url": "https://api.github.com/users/adigoryl/following{/other_user}",
"gists_url": "https://api.github.com/users/adigoryl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adigoryl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adigoryl/subscriptions",
"organizations_url": "https://api.github.com/users/adigoryl/orgs",
"repos_url": "https://api.github.com/users/adigoryl/repos",
"events_url": "https://api.github.com/users/adigoryl/events{/privacy}",
"received_events_url": "https://api.github.com/users/adigoryl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, we won't be able to add these feature as they would render the PyTorch model not compatible with the pretrained model open-sourced by OpenAI.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,565 | 1,565 | NONE | null | Hi,
Could the team add support for floating-point data types for the position embedding? Currently it only allows a torch.LongTensor with values between 0 and config.n_positions - 1, of the same shape as the input. I see this as a restriction, since one may want to use floating-point values (e.g. between 0 and 2) to represent positions in the sequence.
What would be even better is if the team could add built-in position-encoding techniques like the one used in the Attention Is All You Need paper, which uses sine and cosine:
PE(pos, 2i) = sin(pos / 10000^(2i / d_model))
PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
@thomwolf Could you please share your thoughts with me on that?
Regards,
Adrian.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/635/timeline | completed | null | null |
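For reference, a sketch of the fixed sinusoidal encoding from Attention Is All You Need that the request above mentions. Note this is separate from GPT-2's pretrained position embeddings, which are a learned lookup over integer indices (hence the LongTensor requirement).

```python
import torch

def sinusoidal_positions(n_pos, dim):
    # PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    # PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    pos = torch.arange(n_pos, dtype=torch.float).unsqueeze(1)   # (n_pos, 1)
    two_i = torch.arange(0, dim, 2, dtype=torch.float)          # (dim // 2,)
    angles = pos / torch.pow(10000.0, two_i / dim)              # (n_pos, dim // 2)
    pe = torch.zeros(n_pos, dim)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

print(sinusoidal_positions(512, 768).shape)  # torch.Size([512, 768])
```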
https://api.github.com/repos/huggingface/transformers/issues/634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/634/comments | https://api.github.com/repos/huggingface/transformers/issues/634/events | https://github.com/huggingface/transformers/issues/634 | 447,526,194 | MDU6SXNzdWU0NDc1MjYxOTQ= | 634 | convert_tf_checkpoint_to_pytorch get different result? | {
"login": "light8lee",
"id": 13532435,
"node_id": "MDQ6VXNlcjEzNTMyNDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/13532435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/light8lee",
"html_url": "https://github.com/light8lee",
"followers_url": "https://api.github.com/users/light8lee/followers",
"following_url": "https://api.github.com/users/light8lee/following{/other_user}",
"gists_url": "https://api.github.com/users/light8lee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/light8lee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/light8lee/subscriptions",
"organizations_url": "https://api.github.com/users/light8lee/orgs",
"repos_url": "https://api.github.com/users/light8lee/repos",
"events_url": "https://api.github.com/users/light8lee/events{/privacy}",
"received_events_url": "https://api.github.com/users/light8lee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,564 | 1,564 | NONE | null | I'm using this repo to do Chinese NER on the MSRA dataset. When I use the pretrained model `bert-base-chinese`, the result is very good: it reaches 0.93+ F1 on the test set in the first epoch. But when I use `convert_tf_checkpoint_to_pytorch` to convert the originally released BERT checkpoint `chinese_L-12_H-768_A-12` to pytorch_model.bin, the result is very bad: it only reaches an F1 score of about 0.5 on the test set. Is there anything I am doing wrong? (A conversion sketch follows this record.)
Also, when I use `BertForPreTraining.from_pretrained`, the result is fine. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/634/timeline | completed | null | null |
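One thing worth checking for the discrepancy above is that the conversion and the subsequent load point at matching files. A hedged sketch of the conversion call (the function name and signature follow this repo's conversion script, so double-check against your installed version; the paths are the checkpoint folder from Google):

```python
from pytorch_pretrained_bert.convert_tf_checkpoint_to_pytorch import (
    convert_tf_checkpoint_to_pytorch,
)

convert_tf_checkpoint_to_pytorch(
    "chinese_L-12_H-768_A-12/bert_model.ckpt",    # TF checkpoint prefix
    "chinese_L-12_H-768_A-12/bert_config.json",   # matching config
    "chinese_L-12_H-768_A-12/pytorch_model.bin",  # output dump
)
```

When loading, point `from_pretrained` at the folder containing `pytorch_model.bin`, `bert_config.json`, and `vocab.txt`, so the converted weights (and not a stale cached archive) are actually used.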
https://api.github.com/repos/huggingface/transformers/issues/633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/633/comments | https://api.github.com/repos/huggingface/transformers/issues/633/events | https://github.com/huggingface/transformers/issues/633 | 447,481,425 | MDU6SXNzdWU0NDc0ODE0MjU= | 633 | bert->onnx ->caffe2 weird error | {
"login": "maeotaku",
"id": 5332911,
"node_id": "MDQ6VXNlcjUzMzI5MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5332911?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maeotaku",
"html_url": "https://github.com/maeotaku",
"followers_url": "https://api.github.com/users/maeotaku/followers",
"following_url": "https://api.github.com/users/maeotaku/following{/other_user}",
"gists_url": "https://api.github.com/users/maeotaku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maeotaku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maeotaku/subscriptions",
"organizations_url": "https://api.github.com/users/maeotaku/orgs",
"repos_url": "https://api.github.com/users/maeotaku/repos",
"events_url": "https://api.github.com/users/maeotaku/events{/privacy}",
"received_events_url": "https://api.github.com/users/maeotaku/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@maeotaku were you ever able to figure this out? I'd be curious to see what numbers you were seeing when running in caffe2.\r\n\r\nIf you didn't figure this out, seems similar this [pytorch issue](https://github.com/pytorch/pytorch/issues/18475)"
] | 1,558 | 1,570 | 1,564 | NONE | null | I'm really not sure if I should post this here, but I'm having this problem with the pretrained BERT for sequence classification, specifically when I try to consume the ONNX version of the model with Caffe2. I get this output:
```
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/onnx/workspace.py", line 63, in f
return getattr(workspace, attr)(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 250, in RunNet
StringifyNetName(name), num_iter, allow_fail,
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 211, in CallWithExceptionIntercept
return func(*args, **kwargs)
RuntimeError: [enforce fail at pow_op.h:100] A.sizes() == B.sizes(). [4, 512, 768] vs []. Dimension mismatch - did you forget to set broadcast=1?
Error from operator:
input: "222" input: "223" output: "224" name: "" type: "Pow" device_option { device_type: 1 device_id: 3 }frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f9c8cdaf441 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #1: c10::ThrowEnforceNotMet(char const*, int, char const*, std::string const&, void const*) + 0x49 (0x7f9c8cdaf259 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #2: <unknown function> + 0x2b63861 (0x7f9c44eed861 in /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2_gpu.so)
frame #3: <unknown function> + 0x15a3555 (0x7f9c4392d555 in /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2_gpu.so)
frame #4: caffe2::SimpleNet::Run() + 0x161 (0x7f9c396ac101 in /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2.so)
frame #5: caffe2::Workspace::RunNet(std::string const&) + 0x3a (0x7f9c396e35aa in /usr/local/lib/python3.6/dist-packages/torch/lib/libcaffe2.so)
frame #6: <unknown function> + 0x4e38a (0x7f9bbe6fd38a in /usr/local/lib/python3.6/dist-packages/caffe2/python/caffe2_pybind11_state_gpu.cpython-36m-x86_64-linux-gnu.so)
frame #7: <unknown function> + 0x9368e (0x7f9bbe74268e in /usr/local/lib/python3.6/dist-packages/caffe2/python/caffe2_pybind11_state_gpu.cpython-36m-x86_64-linux-gnu.so)
frame #8: PyCFunction_Call + 0xf9 (0x4aeb29 in /usr/bin/python3)
frame #9: _PyEval_EvalFrameDefault + 0x7e42 (0x54d092 in /usr/bin/python3)
frame #10: /usr/bin/python3() [0x543f21]
frame #11: /usr/bin/python3() [0x54421f]
frame #12: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #13: /usr/bin/python3() [0x543f21]
frame #14: PyEval_EvalCodeEx + 0x6d (0x544cfd in /usr/bin/python3)
frame #15: /usr/bin/python3() [0x485857]
frame #16: PyObject_Call + 0x60 (0x4557a0 in /usr/bin/python3)
frame #17: _PyEval_EvalFrameDefault + 0x19e8 (0x546c38 in /usr/bin/python3)
frame #18: /usr/bin/python3() [0x543f21]
frame #19: /usr/bin/python3() [0x54421f]
frame #20: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #21: /usr/bin/python3() [0x543f21]
frame #22: /usr/bin/python3() [0x54421f]
frame #23: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #24: /usr/bin/python3() [0x5432b1]
frame #25: /usr/bin/python3() [0x544447]
frame #26: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #27: /usr/bin/python3() [0x5432b1]
frame #28: /usr/bin/python3() [0x544447]
frame #29: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #30: /usr/bin/python3() [0x543f21]
frame #31: PyEval_EvalCodeEx + 0x6d (0x544cfd in /usr/bin/python3)
frame #32: /usr/bin/python3() [0x485857]
frame #33: PyObject_Call + 0x60 (0x4557a0 in /usr/bin/python3)
frame #34: _PyEval_EvalFrameDefault + 0x19e8 (0x546c38 in /usr/bin/python3)
frame #35: /usr/bin/python3() [0x543f21]
frame #36: PyEval_EvalCodeEx + 0x6d (0x544cfd in /usr/bin/python3)
frame #37: /usr/bin/python3() [0x485857]
frame #38: PyObject_Call + 0x60 (0x4557a0 in /usr/bin/python3)
frame #39: _PyEval_EvalFrameDefault + 0x19e8 (0x546c38 in /usr/bin/python3)
frame #40: /usr/bin/python3() [0x5432b1]
frame #41: /usr/bin/python3() [0x544447]
frame #42: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #43: /usr/bin/python3() [0x5432b1]
frame #44: /usr/bin/python3() [0x544447]
frame #45: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #46: /usr/bin/python3() [0x5432b1]
frame #47: _PyFunction_FastCallDict + 0x236 (0x54d8c6 in /usr/bin/python3)
frame #48: _PyObject_FastCallDict + 0x1ef (0x455acf in /usr/bin/python3)
frame #49: _PyObject_Call_Prepend + 0xcb (0x455bcb in /usr/bin/python3)
frame #50: PyObject_Call + 0x60 (0x4557a0 in /usr/bin/python3)
frame #51: /usr/bin/python3() [0x4c9d13]
frame #52: _PyObject_FastCallDict + 0xa2 (0x455982 in /usr/bin/python3)
frame #53: /usr/bin/python3() [0x544075]
frame #54: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #55: /usr/bin/python3() [0x5432b1]
frame #56: /usr/bin/python3() [0x544447]
frame #57: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #58: /usr/bin/python3() [0x5432b1]
frame #59: /usr/bin/python3() [0x544447]
frame #60: _PyEval_EvalFrameDefault + 0xc5b (0x545eab in /usr/bin/python3)
frame #61: /usr/bin/python3() [0x5432b1]
frame #62: _PyFunction_FastCallDict + 0x236 (0x54d8c6 in /usr/bin/python3)
frame #63: _PyObject_FastCallDict + 0x1ef (0x455acf in /usr/bin/python3)
```
Do any of you know if the pretrained model uses something not supported by Caffe2?
I have also tried several tensor shapes (like (1, 512), (1, 128), (1, 512, 768)) in both long and float, with no luck. I also used (4, 512), (4, 128), (4, 512, 768) just in case, since the input I used when exporting the ONNX file had 4 samples.
Any pointers would be highly appreciated :)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/633/timeline | completed | null | null |
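The `Pow` broadcast complaint above looks like it comes from the `(x - u) ** 2` inside the exported LayerNorm, where the scalar exponent needs broadcasting on the Caffe2 side; a hedged first thing to try is re-exporting with an explicit, newer opset before blaming Caffe2. A sketch (the model choice and input shapes are placeholders):

```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

dummy_ids = torch.ones(1, 128, dtype=torch.long)   # (batch, seq_len)
torch.onnx.export(
    model,
    (dummy_ids,),
    "bert_seq_cls.onnx",
    opset_version=10,                # request a newer opset explicitly
    input_names=["input_ids"],
    output_names=["logits"],
)
```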
https://api.github.com/repos/huggingface/transformers/issues/632 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/632/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/632/comments | https://api.github.com/repos/huggingface/transformers/issues/632/events | https://github.com/huggingface/transformers/issues/632 | 447,152,139 | MDU6SXNzdWU0NDcxNTIxMzk= | 632 | run_classifier.py:TypeError: forward() missing 1 required positional argument: 'input_ids' | {
"login": "SlinZhang",
"id": 35492662,
"node_id": "MDQ6VXNlcjM1NDkyNjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/35492662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SlinZhang",
"html_url": "https://github.com/SlinZhang",
"followers_url": "https://api.github.com/users/SlinZhang/followers",
"following_url": "https://api.github.com/users/SlinZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/SlinZhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SlinZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SlinZhang/subscriptions",
"organizations_url": "https://api.github.com/users/SlinZhang/orgs",
"repos_url": "https://api.github.com/users/SlinZhang/repos",
"events_url": "https://api.github.com/users/SlinZhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/SlinZhang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I ran into the same error and fixed it by using this instead:\r\n\r\n[Pull Request](https://github.com/huggingface/pytorch-pretrained-BERT/pull/604)\r\n\r\nHaven't tried my full dataset yet but on a slice, it worked well!\r\n\r\nEdit: On the full dataset I still get the error.\r\n\r\nEdit2: \r\nChange the train data loader to drop the last \"incomplete\" batch as a workaround:\r\n\r\n> train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=args.train_batch_size, drop_last=True)",
"Same error. I solved this problem by add \"CUDA_VISIBLE_DEVICES=0\" in command line.\r\nI guess this problem caused by \"DataParallel\" which splits and sends batch to GPUs. In some case, batch_size is less than the number of GPU. In consequence, model on extra gpu just get an empty input.\r\nBTW,you should use @AlanHassen 's solution in training stage.",
"same error when use my dataset. File \"run_classifier.py\", TypeError: forward() missing 1 required positional argument: 'input_ids'.\r\n\r\nAdd:\r\ni change the default eval_batch_size from 8 to 12, it can avoid this error.\r\nyou can change the eval_batch_size to some other number.\r\n\r\nof course, this bug should finally be solved by code. \r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,567 | 1,567 | NONE | null | When I use my dataset, I get `forward() missing 1 required positional argument: 'input_ids'`, but I can see "input_ids, input_mask, segment_ids, label_ids" in the batch. (A workaround sketch follows this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/632/timeline | completed | null | null |
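A minimal sketch of the workaround from the comments above: under `DataParallel` the batch is split across GPUs, so a trailing batch smaller than the GPU count leaves a replica with no input; dropping it (or pinning the run to one device via `CUDA_VISIBLE_DEVICES=0`) avoids the error.

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset

train_data = TensorDataset(torch.arange(10).unsqueeze(1))  # 10 toy examples
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(
    train_data,
    sampler=train_sampler,
    batch_size=4,
    drop_last=True,  # discard the final 2-example remainder batch
)

for (batch,) in train_dataloader:
    print(batch.shape)  # always torch.Size([4, 1])
```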
https://api.github.com/repos/huggingface/transformers/issues/631 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/631/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/631/comments | https://api.github.com/repos/huggingface/transformers/issues/631/events | https://github.com/huggingface/transformers/issues/631 | 447,079,535 | MDU6SXNzdWU0NDcwNzk1MzU= | 631 | from_pretrained | {
"login": "steindor",
"id": 3185711,
"node_id": "MDQ6VXNlcjMxODU3MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3185711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steindor",
"html_url": "https://github.com/steindor",
"followers_url": "https://api.github.com/users/steindor/followers",
"following_url": "https://api.github.com/users/steindor/following{/other_user}",
"gists_url": "https://api.github.com/users/steindor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steindor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steindor/subscriptions",
"organizations_url": "https://api.github.com/users/steindor/orgs",
"repos_url": "https://api.github.com/users/steindor/repos",
"events_url": "https://api.github.com/users/steindor/events{/privacy}",
"received_events_url": "https://api.github.com/users/steindor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Did you try to load the model following the best-practices indicated here: https://github.com/huggingface/pytorch-pretrained-BERT#serialization-best-practices",
"All but load the tokenizer from the vocab file. Think that would make a large difference?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,564 | 1,564 | NONE | null | I have a question regarding the from_pretrained method since I experienced a bit unexpected behaviour. It's regarding how to save a BERT classifier model as a whole.
I am experimenting with a classifier on top of BERT on the stack overflow question / tags dataset which contains 20 classes of 40.000 text samples. I trained a classifier on 80% of the data for 3 epochs and saved the model in the following and recommended way:
```
def save_model(model):
# Save a trained model, configuration and tokenizer
model_to_save = model.module if hasattr(model, 'module') else model # Only save the model it-self
# If we save using the predefined names, we can load using `from_pretrained`
output_model_file = os.path.join("./", "pytorch_model.bin")
output_config_file = os.path.join("./", "bert_config.json")
torch.save(model_to_save.state_dict(), output_model_file)
model_to_save.config.to_json_file(output_config_file)
tokenizer.save_vocabulary("./")
```
This gave me a model .bin file and a config file.
The training accuracy was around 90% after the last epoch on 32,000 training samples, leaving 8,000 samples for evaluation. I then instantiated a new BERT model with the from_pretrained method without supplying a state_dict, and ran the evaluation, which surprisingly gave these results:
{'eval_loss': 9.04939697444439, 'eval_accuracy': 0.036875}
I ran through the from_pretrained method and saw that the .bin file it falls back to is a PyTorch dump of a BertForPreTraining instance, which I presume means that the classifier weights are not restored when loading this way?
Then it would probably be necessary to pass the state_dict parameter when loading a classifier model with from_pretrained? If so, is it necessary to pass two things (the .bin archive name and the state_dict), or is there another way to do this? (A reload sketch follows this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/631/timeline | completed | null | null |
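A hedged sketch of the reload path the questions above are after: pass the saved `state_dict` to `from_pretrained` (or point it at the directory holding `pytorch_model.bin` and `bert_config.json`) so the classifier head's weights are restored, not just the pretrained encoder.

```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

# Reload the fine-tuned weights saved by save_model() above.
state_dict = torch.load("./pytorch_model.bin", map_location="cpu")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",       # supplies the architecture/config
    state_dict=state_dict,     # supplies ALL weights, classifier head included
    num_labels=20,
)
model.eval()
```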
https://api.github.com/repos/huggingface/transformers/issues/630 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/630/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/630/comments | https://api.github.com/repos/huggingface/transformers/issues/630/events | https://github.com/huggingface/transformers/pull/630 | 447,029,680 | MDExOlB1bGxSZXF1ZXN0MjgxMTA4NTIx | 630 | Update run_squad.py | {
"login": "tguens",
"id": 50817608,
"node_id": "MDQ6VXNlcjUwODE3NjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/50817608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tguens",
"html_url": "https://github.com/tguens",
"followers_url": "https://api.github.com/users/tguens/followers",
"following_url": "https://api.github.com/users/tguens/following{/other_user}",
"gists_url": "https://api.github.com/users/tguens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tguens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tguens/subscriptions",
"organizations_url": "https://api.github.com/users/tguens/orgs",
"repos_url": "https://api.github.com/users/tguens/repos",
"events_url": "https://api.github.com/users/tguens/events{/privacy}",
"received_events_url": "https://api.github.com/users/tguens/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,558 | 1,560 | 1,560 | CONTRIBUTOR | null | Indentation change so that the output "nbest_predictions.json" is not empty. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/630/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/630",
"html_url": "https://github.com/huggingface/transformers/pull/630",
"diff_url": "https://github.com/huggingface/transformers/pull/630.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/630.patch",
"merged_at": 1560524186000
} |
https://api.github.com/repos/huggingface/transformers/issues/629 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/629/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/629/comments | https://api.github.com/repos/huggingface/transformers/issues/629/events | https://github.com/huggingface/transformers/issues/629 | 446,952,695 | MDU6SXNzdWU0NDY5NTI2OTU= | 629 | Is loss.mean() needed? | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This line is used when people use multi-gpu in a single python process (parallel instead of distributed). This is not the recommended setting (distributed is usually faster).\r\n\r\nI wrote a blog post on this (parallel/distributed and the like) a few months ago: https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255",
"So on a multi-GPU machine, if we run `run_classifier.py`, by default it uses the distributed setting, correct? I'm assuming the answer is yes because when I ran this code on a 2 gpu machine, the `loss` was already a single-element tensor before the `.mean()` call.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,564 | 1,564 | CONTRIBUTOR | null | In `run_classifier.py`, there is a:
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_classifier.py#L841-L842
However, a couple of lines higher, the logits are already flattened
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_classifier.py#L834-L839
So I assume the returned loss will only ever be a single-element tensor, correct? I also tested this code on 2 GPUs, and there is indeed only a single-element tensor. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/629/timeline | completed | null | null |
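A condensed sketch of the pattern discussed above: under `torch.nn.DataParallel` each replica returns its own scalar loss and the gather step stacks them, so the script reduces with `.mean()`; with a single GPU (or DistributedDataParallel) the call is effectively a no-op on one value.

```python
import torch

# What run_classifier.py effectively does after the forward pass
# (the two-element tensor stands in for per-GPU losses gathered by DataParallel):
loss = torch.tensor([0.71, 0.93])
n_gpu = 2
if n_gpu > 1:
    loss = loss.mean()  # average the per-replica losses into one scalar
print(loss.item())
```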
https://api.github.com/repos/huggingface/transformers/issues/628 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/628/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/628/comments | https://api.github.com/repos/huggingface/transformers/issues/628/events | https://github.com/huggingface/transformers/issues/628 | 446,726,036 | MDU6SXNzdWU0NDY3MjYwMzY= | 628 | IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) | {
"login": "guhur",
"id": 12297742,
"node_id": "MDQ6VXNlcjEyMjk3NzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/12297742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guhur",
"html_url": "https://github.com/guhur",
"followers_url": "https://api.github.com/users/guhur/followers",
"following_url": "https://api.github.com/users/guhur/following{/other_user}",
"gists_url": "https://api.github.com/users/guhur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guhur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guhur/subscriptions",
"organizations_url": "https://api.github.com/users/guhur/orgs",
"repos_url": "https://api.github.com/users/guhur/repos",
"events_url": "https://api.github.com/users/guhur/events{/privacy}",
"received_events_url": "https://api.github.com/users/guhur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oops... I forgot to \"batch\" the input...\r\n\r\nHere is a working sample:\r\n\r\n```\r\nfrom pytorch_pretrained_bert.modeling import BertModel\r\nfrom pytorch_pretrained_bert.tokenization import BertTokenizer\r\nimport torch\r\n\r\nembed = BertModel.from_pretrained('bert-base-uncased')\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n\r\nsentence = \"the red cube is at your left\"\r\ntokens = [\"[CLS]\"] + tokenizer.tokenize(sentence) + [\"[SEP]\"] \r\ninput_ids = torch.tensor(tokenizer.convert_tokens_to_ids(tokens))\r\n\r\nembed(input_ids.unsqueeze(0))\r\n```",
"> Oops... I forgot to \"batch\" the input...\r\n> \r\n> Here is a working sample:\r\n> \r\n> ```\r\n> from pytorch_pretrained_bert.modeling import BertModel\r\n> from pytorch_pretrained_bert.tokenization import BertTokenizer\r\n> import torch\r\n> \r\n> embed = BertModel.from_pretrained('bert-base-uncased')\r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> \r\n> sentence = \"the red cube is at your left\"\r\n> tokens = [\"[CLS]\"] + tokenizer.tokenize(sentence) + [\"[SEP]\"] \r\n> input_ids = torch.tensor(tokenizer.convert_tokens_to_ids(tokens))\r\n> \r\n> embed(input_ids.unsqueeze(0))\r\n> ```\r\n\r\nThank u so much, u helped me a lot."
] | 1,558 | 1,573 | 1,558 | CONTRIBUTOR | null | A simple call to BertModel does not work well here.
Here is a minimal code example:
```
from pytorch_pretrained_bert.modeling import BertModel
from pytorch_pretrained_bert.tokenization import BertTokenizer
import torch
embed = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
sentence = "the red cube is at your left"
tokens = ["[CLS]"] + tokenizer.tokenize(sentence) + ["[SEP]"]
input_ids = torch.tensor(tokenizer.convert_tokens_to_ids(tokens))
print(input_ids)
embed(input_ids)
```
I obtained the following error:
> tensor([ 101, 1996, 2417, 14291, 2003, 2012, 2115, 2187, 102])
> ---------------------------------------------------------------------------
> IndexError Traceback (most recent call last)
> <ipython-input-3-66d7a2bcfb96> in <module>
> 11
> 12 print(input_ids)
> ---> 13 embed(input_ids)
>
> ~/.pyenv/versions/3.6.7/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
> 491 result = self._slow_forward(*input, **kwargs)
> 492 else:
> --> 493 result = self.forward(*input, **kwargs)
> 494 for hook in self._forward_hooks.values():
> 495 hook_result = hook(self, input, result)
>
> ~/src/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py in forward(self, input_ids, token_type_ids, attention_mask, output_all_encoded_layers)
> 731 extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
> 732
> --> 733 embedding_output = self.embeddings(input_ids, token_type_ids)
> 734 encoded_layers = self.encoder(embedding_output,
> 735 extended_attention_mask,
>
> ~/.pyenv/versions/3.6.7/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
> 491 result = self._slow_forward(*input, **kwargs)
> 492 else:
> --> 493 result = self.forward(*input, **kwargs)
> 494 for hook in self._forward_hooks.values():
> 495 hook_result = hook(self, input, result)
>
> ~/src/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py in forward(self, input_ids, token_type_ids)
> 262
> 263 def forward(self, input_ids, token_type_ids=None):
> --> 264 seq_length = input_ids.size(1)
> 265 position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
> 266 position_ids = position_ids.unsqueeze(0).expand_as(input_ids)
>
> IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
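For reference, the resolution (also posted in the comments above) is to add the missing batch dimension before calling the model; a minimal sketch:
```
# BertModel expects input of shape (batch_size, seq_len)
embed(input_ids.unsqueeze(0))
```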
I am using Python 3.6.7 and the code on the master branch. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/628/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/627 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/627/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/627/comments | https://api.github.com/repos/huggingface/transformers/issues/627/events | https://github.com/huggingface/transformers/issues/627 | 446,551,658 | MDU6SXNzdWU0NDY1NTE2NTg= | 627 | BERT QnA is not matching correct answer when document is in QnA format | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"have you tried using your own data to train the model rather than using the squad1.1 or squad 2.0 data?\r\nI am doing QnA system as well, I have my own data and I split them into train, dev and test data, then use the train and dev data to train the model, eventually it works ok on the test data.\r\nbecause I am building the QnA system for my own task, not squad task, so I trained the model by using own data.\r\n",
"Thanks for reply and confirming that it works. @mushro00om May I know how much is your training data? (Building a large QnA training corpus is a challenge). ",
"@SandeepBhutani \r\nMy training data contain 140k questions, my dev data contain 5k questions, I haven't tried using the whole 140k to train yet because its a bit too large, I took 50k out of 140k to train, and it took roughly 3 hours.",
"@mushro00om : Thats a huge training data set. Unfortunately creating this corpus takes a lot of effort. So we are trying to ask same question that is there as it is in the document, based on vanilla bert-uncased (or squad trained too). ",
"@SandeepBhutani Wish you good luck !",
"Hey @SandeepBhutani, I am facing the exact same issue. Did you solve it? I would like to talk to you about the same. ",
"Hi.. Not yet\n\nOn Tue, 18 Jun, 2019, 4:21 PM dhruvkp090, <[email protected]> wrote:\n\n> Hey @SandeepBhutani <https://github.com/SandeepBhutani>, I am facing the\n> exact same issue. Did you solve it? I would like to talk to you about the\n> same.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-pretrained-BERT/issues/627?email_source=notifications&email_token=AHRBKIBNACCIYZFQWCOTNKLP3C43FA5CNFSM4HOJYYQKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGODX57T3Q#issuecomment-503052782>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AHRBKIEHHKK3S3IFTCVWWNDP3C43FANCNFSM4HOJYYQA>\n> .\n>\n",
"@SandeepBhutani is there any way we can connect? I would like to talk to you about this..",
"Please email me your contact details at [email protected]\n\nOn Tue, 18 Jun, 2019, 4:28 PM dhruvkp090, <[email protected]> wrote:\n\n> @SandeepBhutani <https://github.com/SandeepBhutani> is there any way we\n> can connect? I would like to talk to you about this..\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-pretrained-BERT/issues/627?email_source=notifications&email_token=AHRBKIAJFWSHZTLRDSGZNSDP3C5UVA5CNFSM4HOJYYQKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGODX6ADIY#issuecomment-503054755>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AHRBKIDPOVXVESF72DCFYWLP3C5UVANCNFSM4HOJYYQA>\n> .\n>\n",
"@SandeepBhutani, I sent my details.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,566 | 1,566 | NONE | null | I have BERT fine-tuned on SQuAD (and have also tried it without fine-tuning). My documents contain questions and long answers. When we ask a question exactly as it appears in the document and have BERT find that question within the document, it gives some arbitrary answer from some other page of the document.
What could be wrong? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/627/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/626 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/626/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/626/comments | https://api.github.com/repos/huggingface/transformers/issues/626/events | https://github.com/huggingface/transformers/issues/626 | 446,529,444 | MDU6SXNzdWU0NDY1Mjk0NDQ= | 626 | How to use run_squad.py to produce multiple answers for a question? | {
"login": "mushro00om",
"id": 47710099,
"node_id": "MDQ6VXNlcjQ3NzEwMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/47710099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mushro00om",
"html_url": "https://github.com/mushro00om",
"followers_url": "https://api.github.com/users/mushro00om/followers",
"following_url": "https://api.github.com/users/mushro00om/following{/other_user}",
"gists_url": "https://api.github.com/users/mushro00om/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mushro00om/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mushro00om/subscriptions",
"organizations_url": "https://api.github.com/users/mushro00om/orgs",
"repos_url": "https://api.github.com/users/mushro00om/repos",
"events_url": "https://api.github.com/users/mushro00om/events{/privacy}",
"received_events_url": "https://api.github.com/users/mushro00om/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"soloved",
"Can you share how did you solve that problem?",
"@armheb \r\nSure, some questions in my dataset have multiple answers, some have one answer, some no answer.\r\n\r\nFirstly, I add a for loop in the \"read_squad_example\" method to allow the code to read all answers for each question and build N SquadExamples for each question, N is the number of answers (This is for my case, you don't have to do it, because I need to use all answers, the original squad code only reads the first answer of each question even the question has multiple answers).\r\n\r\nThe run_squad.py produces a \"nbest_predictions.json\" file, you can see the model provides top 20 possible answers for each question, with possibilities, so I just simply pick some of those answers according to their possibilities.\r\n\r\nHowever, I have to admit that eventually the performance isn't that good. it works but just not that good, but I think it can be improved by some way.",
"@mushro00om \r\nHi, \r\nCan you give sample codes for how you used your model for prediction given a text corpus and a question?",
"@Swathygsb Hi, sorry for late reply. Actually the code script I used is not this Pytorch version, I used the Tensorflow version provided by Google, because it is much more easier and they provide very clear guidance, Here is the link:\r\nhttps://github.com/google-research/bert\r\nThe most of the code remained unchanged, I basically modified the read_squad_examples method to allow process multiple answers (in my task, a question may have more than one answer, the original code can only process one answer for each question). \r\nSo if all your questions have only one particular answer, you can simply follow the guidance, or if your questions may have more than one answer, you can give me your email and i can send my code to you.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,569 | 1,569 | NONE | null | Hello,
I am using run_squad.py to build my own question answering system. The problem is that I want the system to be able to output multiple answers for a question.
The number of answers can be zero, one, or several. What changes do I need to make to the code to achieve this? Thank you
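For reference, the approach described in the comments above is to read the top candidates that run_squad.py writes to nbest_predictions.json and keep those above a probability threshold; a minimal sketch, assuming the file maps each question id to a list of candidates with "text" and "probability" fields, and using a hypothetical 0.2 threshold:
```
import json

with open("nbest_predictions.json") as f:
    nbest = json.load(f)

# keep every candidate answer whose probability clears the threshold
answers = {qid: [c["text"] for c in cands if c["probability"] > 0.2]
           for qid, cands in nbest.items()}
```
 | {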
"url": "https://api.github.com/repos/huggingface/transformers/issues/626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/626/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/625 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/625/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/625/comments | https://api.github.com/repos/huggingface/transformers/issues/625/events | https://github.com/huggingface/transformers/issues/625 | 446,203,941 | MDU6SXNzdWU0NDYyMDM5NDE= | 625 | Tried to visualize the CLS Token embeddings after fine-tuning on SST-2 using t-SNE, but no clear clustered visualizations of positive and negative sentences ! | {
"login": "rsc90",
"id": 50839274,
"node_id": "MDQ6VXNlcjUwODM5Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/50839274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rsc90",
"html_url": "https://github.com/rsc90",
"followers_url": "https://api.github.com/users/rsc90/followers",
"following_url": "https://api.github.com/users/rsc90/following{/other_user}",
"gists_url": "https://api.github.com/users/rsc90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rsc90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rsc90/subscriptions",
"organizations_url": "https://api.github.com/users/rsc90/orgs",
"repos_url": "https://api.github.com/users/rsc90/repos",
"events_url": "https://api.github.com/users/rsc90/events{/privacy}",
"received_events_url": "https://api.github.com/users/rsc90/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @rsc90, \r\nThe `BertForSequenceClassification` model use a [linear layer](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L994) on top of Bert's `pooled_output` which is a [small feed-forward layer with a tanh activation](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L417-L429).\r\n\r\nI would rather use the output of Bert's `pooled_output` or the last hidden-layer for what you are doing. Why do you use layer -2?",
"Hello @thomwolf ,\r\nThank you. I had done with last layer as well but even the clusters were not clear as shown in below fig. \r\nI had read that last layer would be sometimes biased so i didn't, but well i experimented that as well. \r\n\r\n\r\n\r\nOk. could you let me know how to collect this pooled output for sentence representations after finetuning ?\r\n\r\nbest,\r\ntinya",
"You can try to initialize a `BertModel` from your fine-tuned `BertForSequenceClassification` model (I hope you fine-tuned a model otherwise it's normal the representations are not adapted to your task).\r\n\r\nJust do `model = BertModel.from_pretrained('path-to-your-fine-tuned-model')`.\r\nAnd then use the pooled output of your `BertModel`.\r\n\r\nStill not sure what you are trying to do here in general by the way.",
"Yeah idid the same, like:\r\n1. I used the run_classifier.py on SST-2 dataset, saved the model**( fine_tuned_model)**\r\n2. Used **fine_tuned_model** in extract_features.py and collected this output.jsonl (as you said here )\r\n3. From json file i plot the vectors corresponding to CLS embeddings using t-SNE \r\n\r\nIntention of the experiment is, if CLS tokens were carrying representations of sentence on downstream tasks, then i was expecting something like representations what we get when we plot MNSIT data using t-SNE. (I just want to make sure whether CLS is carrying the complete sentence representation after finetuning on downstream task, if so then why i am not getting separate clusters )\r\n\r\n\r\nPlease correct me, if i am missing something or do you suggest some other experiments to verify my thoughts.\r\n\r\nMany thanks,\r\ntinya\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any updates on this issue?"
] | 1,558 | 1,606 | 1,564 | NONE | null | I have used run_classifier.py to fine-tune the model on SST-2 data, and used this model in extract_features.py to extract the embeddings of some sentences (fed only the sentences, input.txt). Later I took these features from the .jsonl file, used the vectors of layer -2 corresponding to the CLS token, and tried to visualize them with t-SNE, hoping to see clear separation between the positive and negative sentences. But I could not get any clear clusters.
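For reference, a minimal sketch of the pooled-output alternative suggested in the comments above; the checkpoint path and the example sentence are placeholders:
```
import torch
from pytorch_pretrained_bert import BertModel, BertTokenizer

# load the fine-tuned weights into a plain BertModel (placeholder path)
model = BertModel.from_pretrained("path-to-your-fine-tuned-model")
model.eval()
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

tokens = ["[CLS]"] + tokenizer.tokenize("a great movie") + ["[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    _, pooled_output = model(input_ids, output_all_encoded_layers=False)
# pooled_output has shape (1, hidden_size) and can be collected for t-SNE
```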
So, my questions are:
Que. 1: Does the CLS token after fine-tuning represent the entire sentence, so that one can use it on downstream tasks?
Que. 2: What is the best way to verify that the CLS token after fine-tuning is carrying the sentence representation? (For example: I tried to visualize it using t-SNE.)
Que. 3: I also used those CLS token vectors in scikit-learn (naive Bayes) models, but I got an accuracy of around 50%, while BERT uses the same vectors in evaluation and achieves 93% accuracy. How is that possible? Is my approach to checking the CLS token vectors wrong?
The following figure shows the visualization of the CLS vectors using t-SNE, along with the corresponding labels of the sentences (vectors from layer -2 are used for the plot).
It would be great if @thomwolf could have a look at this issue too.
Looking forward to suggestions from all folks around here!
best,
Tinya

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/625/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/624 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/624/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/624/comments | https://api.github.com/repos/huggingface/transformers/issues/624/events | https://github.com/huggingface/transformers/issues/624 | 446,112,582 | MDU6SXNzdWU0NDYxMTI1ODI= | 624 | tokenization_gpt2.py - on python 2 you can use backports.functools_lru_cache package from pypi | {
"login": "philip-bl",
"id": 6838251,
"node_id": "MDQ6VXNlcjY4MzgyNTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6838251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philip-bl",
"html_url": "https://github.com/philip-bl",
"followers_url": "https://api.github.com/users/philip-bl/followers",
"following_url": "https://api.github.com/users/philip-bl/following{/other_user}",
"gists_url": "https://api.github.com/users/philip-bl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philip-bl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philip-bl/subscriptions",
"organizations_url": "https://api.github.com/users/philip-bl/orgs",
"repos_url": "https://api.github.com/users/philip-bl/repos",
"events_url": "https://api.github.com/users/philip-bl/events{/privacy}",
"received_events_url": "https://api.github.com/users/philip-bl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,564 | 1,564 | NONE | null | See https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization_gpt2.py#L28. Instead of skipping `lru_cache` entirely on Python 2, you can use this package.
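A minimal sketch of the suggested fallback, assuming the PyPI backport is installed on Python 2:
```
try:
    from functools import lru_cache  # Python 3
except ImportError:
    # Python 2: the PyPI backport provides the same decorator
    from backports.functools_lru_cache import lru_cache
```
 | {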
"url": "https://api.github.com/repos/huggingface/transformers/issues/624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/624/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/623 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/623/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/623/comments | https://api.github.com/repos/huggingface/transformers/issues/623/events | https://github.com/huggingface/transformers/issues/623 | 445,914,304 | MDU6SXNzdWU0NDU5MTQzMDQ= | 623 | Integration with a retriever Model | {
"login": "rajo19",
"id": 33960895,
"node_id": "MDQ6VXNlcjMzOTYwODk1",
"avatar_url": "https://avatars.githubusercontent.com/u/33960895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajo19",
"html_url": "https://github.com/rajo19",
"followers_url": "https://api.github.com/users/rajo19/followers",
"following_url": "https://api.github.com/users/rajo19/following{/other_user}",
"gists_url": "https://api.github.com/users/rajo19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajo19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajo19/subscriptions",
"organizations_url": "https://api.github.com/users/rajo19/orgs",
"repos_url": "https://api.github.com/users/rajo19/repos",
"events_url": "https://api.github.com/users/rajo19/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajo19/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Have a look at the ParlAI library and in particular [these great models based on BERT](https://github.com/facebookresearch/ParlAI/pull/1331) by @samhumeau.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hey @rajo19,\r\nif you want to do Question Answering at scale with a Retriever + Reader pipeline, it might be worth checking out our new [haystack](https://github.com/deepset-ai/haystack/) project. It builds upon transformers and you can use all the QA models from [here](https://huggingface.co/models) as a reader. ",
"Very interesting, thanks for sharing @tholor! cc @mfuntowicz "
] | 1,558 | 1,579 | 1,564 | NONE | null | How can I integrate BERT with a retriever model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/623/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/622 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/622/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/622/comments | https://api.github.com/repos/huggingface/transformers/issues/622/events | https://github.com/huggingface/transformers/issues/622 | 445,884,076 | MDU6SXNzdWU0NDU4ODQwNzY= | 622 | In run_classifier.py, is "warmup_proportion" a fraction or percentage? | {
"login": "neurite",
"id": 797415,
"node_id": "MDQ6VXNlcjc5NzQxNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/797415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neurite",
"html_url": "https://github.com/neurite",
"followers_url": "https://api.github.com/users/neurite/followers",
"following_url": "https://api.github.com/users/neurite/following{/other_user}",
"gists_url": "https://api.github.com/users/neurite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neurite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neurite/subscriptions",
"organizations_url": "https://api.github.com/users/neurite/orgs",
"repos_url": "https://api.github.com/users/neurite/repos",
"events_url": "https://api.github.com/users/neurite/events{/privacy}",
"received_events_url": "https://api.github.com/users/neurite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Found the same problem. `0.1` means `10%` in [Google's TensorFlow implementation](https://github.com/google-research/bert/blob/d66a146741588fb208450bde15aa7db143baaa69/run_classifier.py#L92).",
"It's a fraction of total training like indicated in the help doc: `0.1 = 10% of training`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,564 | 1,564 | NONE | null | In `run_classifier.py`, the arg parameter `--warmup_proportion` [help doc](https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_classifier.py#L628) says, "E.g., 0.1 = 10%% of training.". Is it actually a percentage such that `0.1` => `0.1%` => `0.001`, which is indeed `10%%` as stated in the help doc? But throughout the code, it seems `0.1` is just a fraction, i.e. `0.1`, not `0.001`. Please clarify and fix the help doc if it is wrong.
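As the comments above confirm, it is a fraction; a small worked example of how such a value is typically consumed, with a hypothetical step count:
```
num_train_steps = 1000
warmup_proportion = 0.1  # a fraction: 10% of training
warmup_steps = int(warmup_proportion * num_train_steps)  # 100 warmup steps
```
 | {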
"url": "https://api.github.com/repos/huggingface/transformers/issues/622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/622/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/621 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/621/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/621/comments | https://api.github.com/repos/huggingface/transformers/issues/621/events | https://github.com/huggingface/transformers/issues/621 | 445,784,159 | MDU6SXNzdWU0NDU3ODQxNTk= | 621 | Question on duplicated sentence | {
"login": "youngminpark2559",
"id": 30585126,
"node_id": "MDQ6VXNlcjMwNTg1MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/30585126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/youngminpark2559",
"html_url": "https://github.com/youngminpark2559",
"followers_url": "https://api.github.com/users/youngminpark2559/followers",
"following_url": "https://api.github.com/users/youngminpark2559/following{/other_user}",
"gists_url": "https://api.github.com/users/youngminpark2559/gists{/gist_id}",
"starred_url": "https://api.github.com/users/youngminpark2559/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/youngminpark2559/subscriptions",
"organizations_url": "https://api.github.com/users/youngminpark2559/orgs",
"repos_url": "https://api.github.com/users/youngminpark2559/repos",
"events_url": "https://api.github.com/users/youngminpark2559/events{/privacy}",
"received_events_url": "https://api.github.com/users/youngminpark2559/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Looks like a minor bug. However, it seems that simply removing this `else` statement may cause some problems according to the previous code logic.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,564 | 1,564 | NONE | null | Hi. I wonder whether these are unnecessarily duplicated statements.
When I run in "test mode", similar statements are executed twice.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L908
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L913
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L1042
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L1044
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/621/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/620 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/620/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/620/comments | https://api.github.com/repos/huggingface/transformers/issues/620/events | https://github.com/huggingface/transformers/pull/620 | 445,755,371 | MDExOlB1bGxSZXF1ZXN0MjgwMTI3MjAy | 620 | Convert pytorch models back to tensorflow | {
"login": "chrislarson1",
"id": 19593484,
"node_id": "MDQ6VXNlcjE5NTkzNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/19593484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrislarson1",
"html_url": "https://github.com/chrislarson1",
"followers_url": "https://api.github.com/users/chrislarson1/followers",
"following_url": "https://api.github.com/users/chrislarson1/following{/other_user}",
"gists_url": "https://api.github.com/users/chrislarson1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrislarson1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrislarson1/subscriptions",
"organizations_url": "https://api.github.com/users/chrislarson1/orgs",
"repos_url": "https://api.github.com/users/chrislarson1/repos",
"events_url": "https://api.github.com/users/chrislarson1/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrislarson1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Changed filename from convert_hf_checkpoint_to_tf.py to convert_pytorch_checkpoint_to_tf.py for consistency.",
"I use this to convert the fine-tuned pytorch model to TF and convert this converted TF back to pytorch model. The prediction result seems incorrect with the converted-converted pytorch model. ",
"I compare the dumped stat_dict, it seems the differences lie in :\r\n\r\nencoder.layer.{d}.attention.self.query.weight\r\nencoder.layer.{d}.attention.self.key.weight\r\nencoder.layer.{d}.attention.self.value.weight",
"What model are you trying to convert @Qiuzhuang? Per the docstring, only the BertModel is currently supported. I will add support other models in the near future.",
"Hi @chrislarson1, I use BertModel as follows:\r\nBertModel(config).from_pretrained(pretrain_model_dir)\r\nwhere pretrain_model_dir is (domain-pretraining + task specific classifier) training model dir.",
"I write the sample test code read the pytorch model and tf-pytorch model via BertModel.\r\n\r\n\r\n",
"@chrislarson1 we need to transpose attention Q/K/V weight as well, here is the fixing:\r\n\r\nif any(attention_weight in var_name for attention_weight in [\"dense.weight\", \"attention.self.query\", \"attention.self.key\", \"attention.self.value\"]):\r\n torch_tensor = torch_tensor.T\r\n tf_tensor = assign_tf_var(tensor=torch_tensor, name=tf_name)",
"I am able to train pytorch fine-tuned model and then convert to tensorflow model for serving purpose. E.g. using bert-as-service. The results are consistent.",
"Thanks @Qiuzhuang, your change has been added.",
"Thanks, @chrislarson1 and @Qiuzhuang.\r\nThis feature in interesting indeed.\r\nI think it could be nice to add a check that output of the PyTorch and TensorFlow models are identical (on at least one example). For the Bert model, I did a simple notebook (see the notebook folder) but it can also be a script or a test.\r\nDo you think you can add something like that @chrislarson1 ?",
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=h1) Report\n> Merging [#620](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/3763f8944dc3fef8afb0c525a2ced8a04889c14f?src=pr&el=desc) will **decrease** coverage by `0.79%`.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #620 +/- ##\n=========================================\n- Coverage 68.23% 67.43% -0.8% \n=========================================\n Files 18 19 +1 \n Lines 3976 4023 +47 \n=========================================\n Hits 2713 2713 \n- Misses 1263 1310 +47\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...retrained\\_bert/convert\\_pytorch\\_checkpoint\\_to\\_tf.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvY29udmVydF9weXRvcmNoX2NoZWNrcG9pbnRfdG9fdGYucHk=) | `0% <0%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=footer). Last update [3763f89...716cc1c](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/620?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"No problem @thomwolf, I've added a notebook that checks correctness.",
"Ok great, let's merge it!",
"Hi, \r\n\r\nI am trying to write a converter for RobertaForSequenceClassification to tensorflow using this script as a guide. I had a question regarding this.\r\n\r\nWhy did we take a transpose of the layers here at all? Is it because of tensorflow treating its layers differently than pytorch?\r\n\r\nAlso, if the dense.weight layers are being transposed, then in s sequence classification model, will the out_proj layer need to be transposed as well?",
"Hi @justachetan, the answer lies in how linear transformations are represented in pytorch vs. tensorflow; they are not the same. In Pytorch, weights in a network **often** get wrapped in the torch.nn.Linear class, which store transposed versions of the weights that would get saved in tensorflow for an equivalent projection (see the example below).\r\n\r\n```\r\n>>> import numpy as np\r\n>>> import tensorflow as tf\r\n>>> import torch\r\n>>> from torch.functional import F\r\n>>> x = np.ones([10, 4])\r\n>>> W = np.ones([5, 4])\r\n>>> tf.matmul(x, W.T).shape\r\n(10, 5)\r\n>>> F.linear(torch.Tensor(x), torch.Tensor(W)).detach().numpy().shape\r\n(10, 5)\r\n```"
] | 1,558 | 1,570 | 1,562 | CONTRIBUTOR | null | Added a file that converts PyTorch models that have been trained/fine-tuned back to TensorFlow. This currently supports the base BERT models (uncased/cased); conversion for other BERT models will be added in the future. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/620/reactions",
"total_count": 7,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/620/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/620",
"html_url": "https://github.com/huggingface/transformers/pull/620",
"diff_url": "https://github.com/huggingface/transformers/pull/620.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/620.patch",
"merged_at": 1562320878000
} |
https://api.github.com/repos/huggingface/transformers/issues/619 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/619/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/619/comments | https://api.github.com/repos/huggingface/transformers/issues/619/events | https://github.com/huggingface/transformers/issues/619 | 445,747,651 | MDU6SXNzdWU0NDU3NDc2NTE= | 619 | Custom data, gradient explosion, accuracy is 0 | {
"login": "pbamotra",
"id": 6505165,
"node_id": "MDQ6VXNlcjY1MDUxNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6505165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pbamotra",
"html_url": "https://github.com/pbamotra",
"followers_url": "https://api.github.com/users/pbamotra/followers",
"following_url": "https://api.github.com/users/pbamotra/following{/other_user}",
"gists_url": "https://api.github.com/users/pbamotra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pbamotra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pbamotra/subscriptions",
"organizations_url": "https://api.github.com/users/pbamotra/orgs",
"repos_url": "https://api.github.com/users/pbamotra/repos",
"events_url": "https://api.github.com/users/pbamotra/events{/privacy}",
"received_events_url": "https://api.github.com/users/pbamotra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,563 | 1,563 | NONE | null | Hi,
I have 16,000+ labels to predict using the sequence classifier. I tried running the code with BertAdam (no gradient clipping) and a low LR of 1e-5, but my loss does not improve and the accuracy stays at zero. Gradient clipping doesn't help either. I've checked my inputs and they are correct. Any help is welcome.
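For anyone reproducing this, a minimal sketch of the setup described above; num_labels and t_total are placeholders. Note that BertAdam already applies norm-based gradient clipping through its max_grad_norm argument (1.0 by default):
```
from pytorch_pretrained_bert import BertAdam, BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=16000)
optimizer = BertAdam(model.parameters(), lr=1e-5, warmup=0.1,
                     t_total=10000, max_grad_norm=1.0)
```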
model: base uncased | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/619/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/618 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/618/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/618/comments | https://api.github.com/repos/huggingface/transformers/issues/618/events | https://github.com/huggingface/transformers/issues/618 | 445,703,262 | MDU6SXNzdWU0NDU3MDMyNjI= | 618 | Loss function of run_classifier.py takes in 2 inputs of different dimensions. | {
"login": "datduong",
"id": 10081048,
"node_id": "MDQ6VXNlcjEwMDgxMDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/10081048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/datduong",
"html_url": "https://github.com/datduong",
"followers_url": "https://api.github.com/users/datduong/followers",
"following_url": "https://api.github.com/users/datduong/following{/other_user}",
"gists_url": "https://api.github.com/users/datduong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/datduong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/datduong/subscriptions",
"organizations_url": "https://api.github.com/users/datduong/orgs",
"repos_url": "https://api.github.com/users/datduong/repos",
"events_url": "https://api.github.com/users/datduong/events{/privacy}",
"received_events_url": "https://api.github.com/users/datduong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing issue, because I pass in the num_labels as 1 instead of 2 for the QNLI task. I was thinking that giving 1 label is enough because the 2nd label can be inferred from the 1st one. "
] | 1,558 | 1,558 | 1,558 | NONE | null | I am having an error here https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L836
In this line, `loss = loss_fct(logits.view(-1, num_labels), label_ids.view(-1))`, suppose we have 2 labels (entailment vs. not_entailment, as in the QNLI task). Then:
`logits` already has shape batch_size x num_labels, and `logits.view(-1, num_labels)` converts logits into an array of shape (2 * batch_size) x 1.
So these logits do not match `label_ids.view(-1)`, which is batch_size x 1.
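(The resolution in the comments above was that num_labels had been set to 1; with the correct num_labels the reshape is a no-op, as this minimal standalone check shows:)
```
import torch
from torch.nn import CrossEntropyLoss

batch_size, num_labels = 8, 2
logits = torch.randn(batch_size, num_labels)          # (8, 2)
labels = torch.randint(0, num_labels, (batch_size,))  # (8,)
# view(-1, 2) leaves logits unchanged, so the shapes line up
loss = CrossEntropyLoss()(logits.view(-1, num_labels), labels.view(-1))
```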
Does anyone else see this error when running the code?
Thanks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/618/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/617 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/617/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/617/comments | https://api.github.com/repos/huggingface/transformers/issues/617/events | https://github.com/huggingface/transformers/issues/617 | 445,383,313 | MDU6SXNzdWU0NDUzODMzMTM= | 617 | How to get the softmax probabilities from the TransfoXLLMModel | {
"login": "Shashi456",
"id": 18056781,
"node_id": "MDQ6VXNlcjE4MDU2Nzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/18056781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shashi456",
"html_url": "https://github.com/Shashi456",
"followers_url": "https://api.github.com/users/Shashi456/followers",
"following_url": "https://api.github.com/users/Shashi456/following{/other_user}",
"gists_url": "https://api.github.com/users/Shashi456/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shashi456/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shashi456/subscriptions",
"organizations_url": "https://api.github.com/users/Shashi456/orgs",
"repos_url": "https://api.github.com/users/Shashi456/repos",
"events_url": "https://api.github.com/users/Shashi456/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shashi456/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,563 | 1,563 | NONE | null | ```
A tuple of (last_hidden_state, new_mems)
`softmax_output`: output of the (adaptive) softmax:
if target is None:
Negative log likelihood of shape [batch_size, sequence_length]
else:
log probabilities of tokens, shape [batch_size, sequence_length, n_tokens]
`new_mems`: list (num layers) of updated mem states at the entry of each layer
each mem state is a torch.FloatTensor of size [self.config.mem_len, batch_size, self.config.d_model]
Note that the first two dimensions are transposed in `mems` with regards to `input_ids` and `target`
```
How do I get the negative log-likelihood of a given sentence, say by modifying the example:
```
# Tokenized input
text_1 = "Who was Jim Henson ?"
text_2 = "Jim Henson was a puppeteer"
tokenized_text_1 = tokenizer.tokenize(text_1)
tokenized_text_2 = tokenizer.tokenize(text_2)
# Convert token to vocabulary indices
indexed_tokens_1 = tokenizer.convert_tokens_to_ids(tokenized_text_1)
indexed_tokens_2 = tokenizer.convert_tokens_to_ids(tokenized_text_2)
# Convert inputs to PyTorch tensors
tokens_tensor_1 = torch.tensor([indexed_tokens_1])
tokens_tensor_2 = torch.tensor([indexed_tokens_2])
# Load pre-trained model (weights)
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
model.eval()
# If you have a GPU, put everything on cuda
tokens_tensor_1 = tokens_tensor_1.to('cuda')
tokens_tensor_2 = tokens_tensor_2.to('cuda')
model.to('cuda')
with torch.no_grad():
# Predict all tokens
predictions_1, mems_1 = model(tokens_tensor_1)
# We can re-use the memory cells in a subsequent call to attend a longer context
predictions_2, mems_2 = model(tokens_tensor_2, mems=mems_1)
```
Now when I print the shape of predictions_1, I get `torch.Size([1, 5, 267735])`.
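A minimal sketch, assuming that passing the tokens as `target` makes the head model return per-token negative log-likelihoods of shape [batch_size, sequence_length]:
```
with torch.no_grad():
    nll, mems_1 = model(tokens_tensor_1, target=tokens_tensor_1)  # nll: [1, 5]
sentence_nll = nll.sum().item()
```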
This is related to #477. Also, one final thing: is there any way I could speed up model loading and inference time in eval mode? I just need to use this as a reward in an RL setup, but it takes a tremendous amount of time to evaluate a sentence. Is there any way I could speed up the process? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/617/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/617/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/616 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/616/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/616/comments | https://api.github.com/repos/huggingface/transformers/issues/616/events | https://github.com/huggingface/transformers/issues/616 | 445,354,741 | MDU6SXNzdWU0NDUzNTQ3NDE= | 616 | TransfoXLModel and TransforXLLMModel have the same example | {
"login": "Shashi456",
"id": 18056781,
"node_id": "MDQ6VXNlcjE4MDU2Nzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/18056781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shashi456",
"html_url": "https://github.com/Shashi456",
"followers_url": "https://api.github.com/users/Shashi456/followers",
"following_url": "https://api.github.com/users/Shashi456/following{/other_user}",
"gists_url": "https://api.github.com/users/Shashi456/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shashi456/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shashi456/subscriptions",
"organizations_url": "https://api.github.com/users/Shashi456/orgs",
"repos_url": "https://api.github.com/users/Shashi456/repos",
"events_url": "https://api.github.com/users/Shashi456/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shashi456/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,563 | 1,563 | NONE | null | Can someone help me understand how the outputs would vary, and could someone give an example for the latter?
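A minimal sketch contrasting the two outputs, assuming the API shown in the other Transformer-XL examples in this library; the token ids are placeholders:
```
import torch
from pytorch_pretrained_bert import TransfoXLModel, TransfoXLLMHeadModel

tokens = torch.tensor([[0, 1, 2]])  # placeholder token ids

base = TransfoXLModel.from_pretrained('transfo-xl-wt103')
base.eval()
last_hidden, mems = base(tokens)   # last_hidden: [batch, seq, d_model]

lm = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
lm.eval()
log_probs, mems = lm(tokens)       # [batch, seq, n_tokens] when no target is given
```
 | {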
"url": "https://api.github.com/repos/huggingface/transformers/issues/616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/616/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/615 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/615/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/615/comments | https://api.github.com/repos/huggingface/transformers/issues/615/events | https://github.com/huggingface/transformers/issues/615 | 445,325,779 | MDU6SXNzdWU0NDUzMjU3Nzk= | 615 | Couldn't import '''BertPreTrainedModel''' | {
"login": "chenbingxiayu",
"id": 23647595,
"node_id": "MDQ6VXNlcjIzNjQ3NTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/23647595?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenbingxiayu",
"html_url": "https://github.com/chenbingxiayu",
"followers_url": "https://api.github.com/users/chenbingxiayu/followers",
"following_url": "https://api.github.com/users/chenbingxiayu/following{/other_user}",
"gists_url": "https://api.github.com/users/chenbingxiayu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenbingxiayu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenbingxiayu/subscriptions",
"organizations_url": "https://api.github.com/users/chenbingxiayu/orgs",
"repos_url": "https://api.github.com/users/chenbingxiayu/repos",
"events_url": "https://api.github.com/users/chenbingxiayu/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenbingxiayu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I guess you have to clone the repository. You just need to add all the classes you want to import in this line:\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/pytorch_pretrained_bert/__init__.py#L12\r\nThen you can install from source:\r\n`pip install --editable .`\r\n\r\nOf course since you download all the code, you can directly add your own class in `/pytorch_pretrained_bert/modeling_openai.py` then install from source.",
"So sorry that I am not familiar with the lib installation in python. \r\nThanks for your suggestions. I know how to solve it.\r\n\r\nOn 2019-05-20 19:32, eveliao wrote:\r\n> I guess you have to clone the repository. You just need to add all the\r\n> classes you want to import in this line:\r\n> https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/pytorch_pretrained_bert/__init__.py#L12\r\n> [1]\r\n> Then you can install from source:\r\n> pip install --editable .\r\n> \r\n> Of course since you download all the code, you can directly add your\r\n> own class in /pytorch_pretrained_bert/modeling_openai.py then install\r\n> from source.\r\n> \r\n> --\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub [2], or mute the\r\n> thread [3]. [ { \"@context\": \"http://schema.org\", \"@type\":\r\n> \"EmailMessage\", \"potentialAction\": { \"@type\": \"ViewAction\", \"target\":\r\n> \"https://github.com/huggingface/pytorch-pretrained-BERT/issues/615?email_source=notifications\\u0026email_token=AFUNK24OFCY7KRHX6OVL4PTPWKD3BA5CNFSM4HNTFW5KYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGODVYQ2OA#issuecomment-493948216\",\r\n> \"url\":\r\n> \"https://github.com/huggingface/pytorch-pretrained-BERT/issues/615?email_source=notifications\\u0026email_token=AFUNK24OFCY7KRHX6OVL4PTPWKD3BA5CNFSM4HNTFW5KYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGODVYQ2OA#issuecomment-493948216\",\r\n> \"name\": \"View Issue\" }, \"description\": \"View this Issue on GitHub\",\r\n> \"publisher\": { \"@type\": \"Organization\", \"name\": \"GitHub\", \"url\":\r\n> \"https://github.com\" } } ]\r\n> \r\n> Links:\r\n> ------\r\n> [1]\r\n> https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/pytorch_pretrained_bert/__init__.py#L12\r\n> [2]\r\n> https://github.com/huggingface/pytorch-pretrained-BERT/issues/615?email_source=notifications&email_token=AFUNK24OFCY7KRHX6OVL4PTPWKD3BA5CNFSM4HNTFW5KYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGODVYQ2OA#issuecomment-493948216\r\n> [3]\r\n> https://github.com/notifications/unsubscribe-auth/AFUNK24OKPGO4TJTYJ7SPZ3PWKD3BANCNFSM4HNTFW5A\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,558 | 1,564 | 1,564 | NONE | null | I installed this lib with '''pip install pytorch-pretrained-bert''', and there are no problems when running the examples. However, when I import '''BertPreTrainedModel''' via '''from pytorch_pretrained_bert import BertPreTrainedModel''', an error occurs.
I want to write a new class like '''BertForSequenceClassification''' so that I can use the facility of constructing an instance of the class via '''from_pretrained('bert-base-uncased')'''. My new class needs to inherit from the class '''BertPreTrainedModel'''.
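A minimal sketch of one workaround, assuming the class can be imported from the modeling submodule even when the package root does not re-export it:
```
import torch.nn as nn
from pytorch_pretrained_bert.modeling import BertPreTrainedModel, BertModel

class MyClassifier(BertPreTrainedModel):
    def __init__(self, config, num_labels=2):
        super(MyClassifier, self).__init__(config)
        self.bert = BertModel(config)
        self.classifier = nn.Linear(config.hidden_size, num_labels)
        self.apply(self.init_bert_weights)

# from_pretrained builds the config for you, so no local bert_config.json is needed
model = MyClassifier.from_pretrained('bert-base-uncased')
```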
Alternatively, we would need the config file to construct an instance of a class like '''BertForSequenceClassification'''; however, I couldn't find the file 'bert_config.json'.
Are there other ways to write a new class like '''BertForSequenceClassification'''?
Thanks all of you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/615/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/614 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/614/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/614/comments | https://api.github.com/repos/huggingface/transformers/issues/614/events | https://github.com/huggingface/transformers/pull/614 | 444,547,131 | MDExOlB1bGxSZXF1ZXN0Mjc5MTg1MzA4 | 614 | global grad norm clipping (#581) | {
"login": "lukovnikov",
"id": 1732910,
"node_id": "MDQ6VXNlcjE3MzI5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1732910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukovnikov",
"html_url": "https://github.com/lukovnikov",
"followers_url": "https://api.github.com/users/lukovnikov/followers",
"following_url": "https://api.github.com/users/lukovnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/lukovnikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukovnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukovnikov/subscriptions",
"organizations_url": "https://api.github.com/users/lukovnikov/orgs",
"repos_url": "https://api.github.com/users/lukovnikov/repos",
"events_url": "https://api.github.com/users/lukovnikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukovnikov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,563 | 1,563 | CONTRIBUTOR | null | (see #581 )
- norm-based gradient clipping was being done per param group
- when there is more than one param group, this differs from global-norm gradient clipping
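A sketch of the distinction (not the PR's exact diff); the two-group optimizer is a toy example:
```
import itertools
import torch
from torch.nn.utils import clip_grad_norm_

a = torch.randn(3, requires_grad=True)
b = torch.randn(3, requires_grad=True)
optimizer = torch.optim.SGD([{"params": [a]}, {"params": [b]}], lr=0.1)
(a.sum() + b.sum()).backward()

# per-group clipping: each group's norm is computed and clipped separately
for group in optimizer.param_groups:
    clip_grad_norm_(group["params"], max_norm=1.0)

# global clipping: a single norm over all parameters together
all_params = itertools.chain.from_iterable(g["params"] for g in optimizer.param_groups)
clip_grad_norm_(all_params, max_norm=1.0)
```
 | {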
"url": "https://api.github.com/repos/huggingface/transformers/issues/614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/614/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/614",
"html_url": "https://github.com/huggingface/transformers/pull/614",
"diff_url": "https://github.com/huggingface/transformers/pull/614.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/614.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/613 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/613/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/613/comments | https://api.github.com/repos/huggingface/transformers/issues/613/events | https://github.com/huggingface/transformers/issues/613 | 444,501,264 | MDU6SXNzdWU0NDQ1MDEyNjQ= | 613 | Learning from scratch not working | {
"login": "kilyyaoyao",
"id": 50670989,
"node_id": "MDQ6VXNlcjUwNjcwOTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/50670989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kilyyaoyao",
"html_url": "https://github.com/kilyyaoyao",
"followers_url": "https://api.github.com/users/kilyyaoyao/followers",
"following_url": "https://api.github.com/users/kilyyaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/kilyyaoyao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kilyyaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kilyyaoyao/subscriptions",
"organizations_url": "https://api.github.com/users/kilyyaoyao/orgs",
"repos_url": "https://api.github.com/users/kilyyaoyao/repos",
"events_url": "https://api.github.com/users/kilyyaoyao/events{/privacy}",
"received_events_url": "https://api.github.com/users/kilyyaoyao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,563 | 1,563 | NONE | null | I'm using simple_lm_learning as it was, except that instead of loading the model from pretrained weights I created a new BertForPreTraining model with the same config as bert-base. How come it's not learning anything and predicts the same token 1996 ("the") for every output? (The setup is sketched below.)
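Roughly, the model is created like this (a sketch; the config values mirror bert-base-uncased):

```python
from pytorch_pretrained_bert.modeling import BertConfig, BertForPreTraining

# randomly initialized weights with the bert-base shape
config = BertConfig(vocab_size_or_config_json_file=30522, hidden_size=768,
                    num_hidden_layers=12, num_attention_heads=12,
                    intermediate_size=3072)
model = BertForPreTraining(config)
```
 | {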
"url": "https://api.github.com/repos/huggingface/transformers/issues/613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/613/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/612 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/612/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/612/comments | https://api.github.com/repos/huggingface/transformers/issues/612/events | https://github.com/huggingface/transformers/issues/612 | 444,466,021 | MDU6SXNzdWU0NDQ0NjYwMjE= | 612 | How to use the fine tuned model for classification (CoLa) task? | {
"login": "GeetDsa",
"id": 13940397,
"node_id": "MDQ6VXNlcjEzOTQwMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GeetDsa",
"html_url": "https://github.com/GeetDsa",
"followers_url": "https://api.github.com/users/GeetDsa/followers",
"following_url": "https://api.github.com/users/GeetDsa/following{/other_user}",
"gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions",
"organizations_url": "https://api.github.com/users/GeetDsa/orgs",
"repos_url": "https://api.github.com/users/GeetDsa/repos",
"events_url": "https://api.github.com/users/GeetDsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/GeetDsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can you please confirm that the \"/examples/run_classifier.py\" file is indeed an example for simple sentence classification?\r\nIt looks like the code here uses the \"BertForSequenceClassification\" model where the tf model uses the \"BertModel\" (line 577 here https://github.com/google-research/bert/blob/master/run_classifier.py) - why is it different?",
"> How to use the fine-tuned model for classification (CoLa) task?\r\n> \r\n> I do not see the argument `--do_predict`, in `/examples/run_classifier.py`.\r\n> \r\n> However, `--do_predict` exists in the original implementation of the Bert.\r\n> \r\n> The fine-tuned model is getting saving in the BERT_OUTPUT_DIR as `pytorch_model.bin`, but is there a simple way to reuse it through the command line?\r\n\r\nI got an solution of QNLI task in GLUE. You can add an arg-parser (--do_predict) and [these lines](https://github.com/weidafeng/NLU2019/blob/master/model/run_classifier.py#L743:#L792), and run this command: \r\n```bash\r\npython ./model/run_classifier.py \\\r\n\t--task_name QNLI \\\r\n\t--do_predict \\\r\n\t--do_lower_case \\\r\n\t--data_dir ./glue_data/QNLI \\\r\n\t--bert_model bert-base-uncased \\\r\n\t--output_dir ./glue_data/QNLI/results **# path to your trained model**\r\n```\r\nYou will get the QNLI.tsv file as you want. Hope this works for you.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,565 | 1,565 | CONTRIBUTOR | null | How to use the fine-tuned model for classification (CoLa) task?
I do not see the argument `--do_predict` in `/examples/run_classifier.py`.
However, `--do_predict` exists in the original implementation of BERT.
The fine-tuned model is saved in BERT_OUTPUT_DIR as `pytorch_model.bin`, but is there a simple way to reuse it from the command line? (A loading sketch follows.)
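In Python the reload itself seems straightforward (a sketch; it assumes the output dir contains both `pytorch_model.bin` and `bert_config.json`, and the path is a placeholder):

```python
from pytorch_pretrained_bert import BertForSequenceClassification, BertTokenizer

output_dir = './cola_output'  # the --output_dir used during fine-tuning
model = BertForSequenceClassification.from_pretrained(output_dir, num_labels=2)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
model.eval()
```

But a flag like the original `--do_predict` would still be more convenient.
 | {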
"url": "https://api.github.com/repos/huggingface/transformers/issues/612/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/612/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/611 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/611/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/611/comments | https://api.github.com/repos/huggingface/transformers/issues/611/events | https://github.com/huggingface/transformers/issues/611 | 444,413,399 | MDU6SXNzdWU0NDQ0MTMzOTk= | 611 | extract_features | {
"login": "zhangatao",
"id": 28706321,
"node_id": "MDQ6VXNlcjI4NzA2MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/28706321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangatao",
"html_url": "https://github.com/zhangatao",
"followers_url": "https://api.github.com/users/zhangatao/followers",
"following_url": "https://api.github.com/users/zhangatao/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangatao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangatao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangatao/subscriptions",
"organizations_url": "https://api.github.com/users/zhangatao/orgs",
"repos_url": "https://api.github.com/users/zhangatao/repos",
"events_url": "https://api.github.com/users/zhangatao/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangatao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Were you able to fix this issue? If yes, can you please share your solution?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,564 | 1,564 | NONE | null | Traceback (most recent call last):
File "extract_features.py", line 297, in <module>
main()
File "extract_features.py", line 230, in main
tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case)
File "/home/py36/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 197, in from_pretrained
tokenizer = cls(resolved_vocab_file, *inputs, **kwargs)
File "/home/py36/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 97, in __init__
self.vocab = load_vocab(vocab_file)
File "/home/py36/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 56, in load_vocab
token = reader.readline()
File "/home/py36/lib/python3.6/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
Hello, I downloaded the pre-trained model from https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz. What caused the above error?
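Could it be that the tokenizer must be pointed at a vocab file rather than at the downloaded archive? The .tar.gz holds only the weights and config, so `load_vocab` would be reading binary data as UTF-8. A sketch of what I think should work (the local path is a placeholder):

```python
from pytorch_pretrained_bert import BertTokenizer

# pass a model name (the vocab is downloaded automatically) ...
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# ... or a plain-text vocab file / a directory containing vocab.txt
tokenizer = BertTokenizer.from_pretrained('/path/to/vocab.txt')
```
 | {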
"url": "https://api.github.com/repos/huggingface/transformers/issues/611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/611/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/610 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/610/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/610/comments | https://api.github.com/repos/huggingface/transformers/issues/610/events | https://github.com/huggingface/transformers/issues/610 | 444,315,994 | MDU6SXNzdWU0NDQzMTU5OTQ= | 610 | t_total | {
"login": "zhangatao",
"id": 28706321,
"node_id": "MDQ6VXNlcjI4NzA2MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/28706321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangatao",
"html_url": "https://github.com/zhangatao",
"followers_url": "https://api.github.com/users/zhangatao/followers",
"following_url": "https://api.github.com/users/zhangatao/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangatao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangatao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangatao/subscriptions",
"organizations_url": "https://api.github.com/users/zhangatao/orgs",
"repos_url": "https://api.github.com/users/zhangatao/repos",
"events_url": "https://api.github.com/users/zhangatao/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangatao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"\r\nI found the reason. When the data is relatively small, this happens. After I added the data, it is normal now."
] | 1,557 | 1,557 | 1,557 | NONE | null | Traceback (most recent call last):
File "finetune_on_pregenerated.py", line 333, in <module>
main()
File "finetune_on_pregenerated.py", line 321, in main
optimizer.step()
File "/home/py36/lib/python3.6/site-packages/pytorch_pretrained_bert/optimization.py", line 290, in step
lr_scheduled *= group['schedule'].get_lr(state['step'])
File "/home/py36/lib/python3.6/site-packages/pytorch_pretrained_bert/optimization.py", line 61, in get_lr
progress = float(step) / self.t_total
ZeroDivisionError: float division by zero
Excuse me, what is the cause of this situation? Why is t_total equal to 0?
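My reading (a rough reconstruction; variable names may differ from the script): the total step count is derived from the dataset size with integer division, so a tiny dataset can make it 0:

```python
# roughly how t_total is computed before being passed to BertAdam
num_train_optimization_steps = int(
    total_train_examples / train_batch_size / gradient_accumulation_steps
) * epochs  # becomes 0 when the data is smaller than one effective batch

# a defensive guard for tiny datasets
num_train_optimization_steps = max(1, num_train_optimization_steps)
```
 | {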
"url": "https://api.github.com/repos/huggingface/transformers/issues/610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/610/timeline | completed | null | null |