url (string, len 62-66) | repository_url (string, 1 class) | labels_url (string, len 76-80) | comments_url (string, len 71-75) | events_url (string, len 69-73) | html_url (string, len 50-56) | id (int64, 377M-2.15B) | node_id (string, len 18-32) | number (int64, 1-29.2k) | title (string, len 1-487) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, nullable ⌀) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, len 0-234k, nullable ⌀) | reactions (dict) | timeline_url (string, len 71-75) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/609 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/609/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/609/comments | https://api.github.com/repos/huggingface/transformers/issues/609/events | https://github.com/huggingface/transformers/issues/609 | 444,313,756 | MDU6SXNzdWU0NDQzMTM3NTY= | 609 | t_total | {
"login": "zhangatao",
"id": 28706321,
"node_id": "MDQ6VXNlcjI4NzA2MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/28706321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangatao",
"html_url": "https://github.com/zhangatao",
"followers_url": "https://api.github.com/users/zhangatao/followers",
"following_url": "https://api.github.com/users/zhangatao/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangatao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangatao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangatao/subscriptions",
"organizations_url": "https://api.github.com/users/zhangatao/orgs",
"repos_url": "https://api.github.com/users/zhangatao/repos",
"events_url": "https://api.github.com/users/zhangatao/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangatao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"ok"
] | 1,557 | 1,557 | 1,557 | NONE | null | 
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/609/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/608 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/608/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/608/comments | https://api.github.com/repos/huggingface/transformers/issues/608/events | https://github.com/huggingface/transformers/issues/608 | 443,964,730 | MDU6SXNzdWU0NDM5NjQ3MzA= | 608 | when using multiple GPUs, `loss.mean()` may have subtle bias | {
"login": "Lvzhh",
"id": 15075627,
"node_id": "MDQ6VXNlcjE1MDc1NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/15075627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lvzhh",
"html_url": "https://github.com/Lvzhh",
"followers_url": "https://api.github.com/users/Lvzhh/followers",
"following_url": "https://api.github.com/users/Lvzhh/following{/other_user}",
"gists_url": "https://api.github.com/users/Lvzhh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lvzhh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lvzhh/subscriptions",
"organizations_url": "https://api.github.com/users/Lvzhh/orgs",
"repos_url": "https://api.github.com/users/Lvzhh/repos",
"events_url": "https://api.github.com/users/Lvzhh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lvzhh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You could fix it with something like this\r\n```\r\nbatch_size = torch.tensor(data.shape[1]).to(device)\r\ndist.all_reduce(batch_size, op=dist.ReduceOp.SUM)\r\ndist.all_reduce(loss, op=dist.ReduceOp.SUM)\r\nmean_loss = loss/batch_size\r\n```",
"Thanks for your solution. I don't think it could fix it, because the `loss` returned by each GPU is already averaged over its `batch_size`.",
"@Lvzhh the examples (like `run_squad.py`) only work in two settings for multi-gpu:\r\n\r\n- using `DataParallel` (not distributed, triggered when you simply run the script on a multi-gpu server): in this case the batch size should be a multiple of the number of gpu or an error is thrown. This is the case in which this `loss = loss.mean()` will be triggered. No risk of differing batch sizes here.\r\n- using `DistributedDataParallel` with a *single* GPU assigned to each process (triggered when you run the example using PyTorch `distributed.launch` script for instance): this only works when 1 GPU is assigned to each process (see [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_squad.py#L853-L857)). In this case `loss = loss.mean()` is not triggered. There is no averaging of the loss over the GPU. What happens here is that it's the gradients which are averaged (read PyTorch [DDP doc](https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel) for instance). The batch size should be the same for each script so no problem here also.\r\n\r\nSo I don't think the problem you are describing will show up in practice.\r\n",
"@thomwolf I'm using the first multi-gpu setting `DataParallel`. And I tried to find some code in `run_squad.py` to ensure that the `batch_size` be a multiple of the number of gpu, but I didn't find.\r\n\r\nAnd I just run this command [here](https://github.com/huggingface/pytorch-pretrained-BERT#squad) with `--train_batch_size 13` using 2 GPUs, and print the size of `start_logits` and its device, the first dimension (`batch_size`) on each GPUs are actually different (7 vs 6).\r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/pytorch_pretrained_bert/modeling.py#L1204-L1206",
"Oh you are right, this error is not there anymore.\r\nWe should probably just check for this again, I'll fix the examples.\r\n\r\n`DataParallel` is in the process of being replaced by `DistributedDataParallel` which is pretty much always faster due to releasing the GiL so maybe you should try DDP.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,563 | 1,563 | NONE | null | The problem is that, when the input is distributed to multiple GPUs, the input on each GPU may have a different `batch_size`.
For example, if you have 2 GPUs and the total batch_size is 13, then the `batch_size` for each GPU will be 7 and 6 respectively, so `loss.mean()` will not give the exact loss. Although it may have little influence on the training of the model, it is not the exact result.
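Under `DataParallel` the replica losses are already per-chunk means, so an exact global mean has to weight each replica by its chunk size. A minimal standalone sketch of that correction (not `run_squad.py` code):
```python
import torch

def exact_mean_loss(per_gpu_loss: torch.Tensor, counts: torch.Tensor) -> torch.Tensor:
    # Weight each replica's mean loss by the number of examples in its chunk.
    counts = counts.to(dtype=per_gpu_loss.dtype, device=per_gpu_loss.device)
    return (per_gpu_loss * counts).sum() / counts.sum()

# e.g. 2 GPUs, total batch 13 split as 7 and 6:
loss = exact_mean_loss(torch.tensor([0.52, 0.48]), torch.tensor([7.0, 6.0]))
```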
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_squad.py#L1006-L1007 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/608/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/607 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/607/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/607/comments | https://api.github.com/repos/huggingface/transformers/issues/607/events | https://github.com/huggingface/transformers/issues/607 | 443,153,390 | MDU6SXNzdWU0NDMxNTMzOTA= | 607 | How to check the vocab size of bert large and bert small? | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,563 | 1,563 | NONE | null | | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/607/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/606 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/606/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/606/comments | https://api.github.com/repos/huggingface/transformers/issues/606/events | https://github.com/huggingface/transformers/issues/606 | 443,153,286 | MDU6SXNzdWU0NDMxNTMyODY= | 606 | How can we import cased bert model? | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,563 | 1,563 | NONE | null | I notice that the author only shows us how to use the uncased model. Could anyone show me how to import a cased model for both the BertModel and BertTokenClassifier models? Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/606/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/605 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/605/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/605/comments | https://api.github.com/repos/huggingface/transformers/issues/605/events | https://github.com/huggingface/transformers/issues/605 | 443,153,143 | MDU6SXNzdWU0NDMxNTMxNDM= | 605 | why use self.apply(self.init_bert_weights) in inheritance class? | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,563 | 1,563 | NONE | null | self.apply(self.init_bert_weights) is already used in the BertModel class, so why do we still need to use self.apply(self.init_bert_weights) in all inheriting models, such as the BertTokenClassification model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/605/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/605/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/604 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/604/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/604/comments | https://api.github.com/repos/huggingface/transformers/issues/604/events | https://github.com/huggingface/transformers/pull/604 | 443,042,368 | MDExOlB1bGxSZXF1ZXN0Mjc4MDEzMTAx | 604 | Fixing issue "Training beyond specified 't_total' steps with schedule 'warmup_linear'" reported in #556 | {
"login": "samuelbroscheit",
"id": 22645035,
"node_id": "MDQ6VXNlcjIyNjQ1MDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/22645035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samuelbroscheit",
"html_url": "https://github.com/samuelbroscheit",
"followers_url": "https://api.github.com/users/samuelbroscheit/followers",
"following_url": "https://api.github.com/users/samuelbroscheit/following{/other_user}",
"gists_url": "https://api.github.com/users/samuelbroscheit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samuelbroscheit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samuelbroscheit/subscriptions",
"organizations_url": "https://api.github.com/users/samuelbroscheit/orgs",
"repos_url": "https://api.github.com/users/samuelbroscheit/repos",
"events_url": "https://api.github.com/users/samuelbroscheit/events{/privacy}",
"received_events_url": "https://api.github.com/users/samuelbroscheit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks good.",
"Ok merging thanks, sorry for the delay!"
] | 1,557 | 1,560 | 1,560 | CONTRIBUTOR | null | Fixing the issues reported in https://github.com/huggingface/pytorch-pretrained-BERT/issues/556
The reason for the issue was that num_optimization_steps was computed from the number of examples, which differs from the actual length of the dataloader when an example is chunked into multiple instances.
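A minimal sketch of the corrected computation (flag names are illustrative, mirroring the example scripts):
```python
num_optimization_steps = (
    len(train_dataloader) // args.gradient_accumulation_steps
) * args.num_train_epochs
```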
The solution in this pull request is to compute num_optimization_steps directly from len(data_loader), as sketched above. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/604/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/604/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/604",
"html_url": "https://github.com/huggingface/transformers/pull/604",
"diff_url": "https://github.com/huggingface/transformers/pull/604.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/604.patch",
"merged_at": 1560523765000
} |
https://api.github.com/repos/huggingface/transformers/issues/603 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/603/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/603/comments | https://api.github.com/repos/huggingface/transformers/issues/603/events | https://github.com/huggingface/transformers/issues/603 | 442,998,014 | MDU6SXNzdWU0NDI5OTgwMTQ= | 603 | Using BERT as feature extractor | {
"login": "shivanshpatel35",
"id": 24505652,
"node_id": "MDQ6VXNlcjI0NTA1NjUy",
"avatar_url": "https://avatars.githubusercontent.com/u/24505652?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shivanshpatel35",
"html_url": "https://github.com/shivanshpatel35",
"followers_url": "https://api.github.com/users/shivanshpatel35/followers",
"following_url": "https://api.github.com/users/shivanshpatel35/following{/other_user}",
"gists_url": "https://api.github.com/users/shivanshpatel35/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shivanshpatel35/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shivanshpatel35/subscriptions",
"organizations_url": "https://api.github.com/users/shivanshpatel35/orgs",
"repos_url": "https://api.github.com/users/shivanshpatel35/repos",
"events_url": "https://api.github.com/users/shivanshpatel35/events{/privacy}",
"received_events_url": "https://api.github.com/users/shivanshpatel35/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"i have same question ",
"How best to fine-tune or pool BERT is an open question in bertology :p \r\n\r\n[\"How to Fine-Tune BERT for Text Classification?\"](https://arxiv.org/pdf/1905.05583.pdf) has a comprehensive overview. Look at table 3 specifically which found that taking the max of the last 4 layers achieves the best performance (in their testing configuration with some fine tuning), this would be my guess. \r\n\r\nYou can also look at the approaches taken by [bert-as-a-service](https://github.com/hanxiao/bert-as-service#q-what-are-the-available-pooling-strategies) which are explained pretty well, in this repo they use MEAN as the default pooling strategy.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,568 | 1,568 | NONE | null | Following extract_features.py, I use bert-large-uncased, which outputs 4 layers of features for each token (word). Since I want to use it as a feature extractor for an entire sentence, which values should I use? Or is there any other processing we should do (like concatenating the output for the last token)? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/603/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/603/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/602 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/602/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/602/comments | https://api.github.com/repos/huggingface/transformers/issues/602/events | https://github.com/huggingface/transformers/issues/602 | 442,935,877 | MDU6SXNzdWU0NDI5MzU4Nzc= | 602 | Different GPT-2 outputs with mixed precision vs single precision | {
"login": "AdamDanielKing",
"id": 5590173,
"node_id": "MDQ6VXNlcjU1OTAxNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5590173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdamDanielKing",
"html_url": "https://github.com/AdamDanielKing",
"followers_url": "https://api.github.com/users/AdamDanielKing/followers",
"following_url": "https://api.github.com/users/AdamDanielKing/following{/other_user}",
"gists_url": "https://api.github.com/users/AdamDanielKing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdamDanielKing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdamDanielKing/subscriptions",
"organizations_url": "https://api.github.com/users/AdamDanielKing/orgs",
"repos_url": "https://api.github.com/users/AdamDanielKing/repos",
"events_url": "https://api.github.com/users/AdamDanielKing/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdamDanielKing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@AdamDanielKing did you manage to fix the divergence somehow? ",
"Was this finally an issue? It seems important and it was closed due to inactivity. @AdamDanielKing did you work it around?",
"@Damiox While sampling with mixed precision gives different results, they seem to still be of high quality. I've been using mixed precision on [talktotransformer.com](https://talktotransformer.com) for at least 6-7 months now and the quality has been excellent.",
"I am not using the \"past\" output at the moment, just doing topk=30. I ran into this issue as I am trying to improve my inferences times.\r\nI have apex installed, and using the gpt2-medium model I get in average per sentence an inference time of 11ms for ~50 tokens within a batch size of 30 sentences on Tesla T4. Bigger batches aren't increasing the throughput. I just tried turning on fp16 into my model via \".half()\" and it seems to be 3x faster. Is it possible? I am wondering whether that is fine or I need to do anything else (e.g. initializing apex, re-training my model with fp16). I feel I may be losing something. What do you think?",
"Talktotransformer.com just uses Apex to cast the model to fp16--no\nretraining. I use the opt level that casts the weights themselves as well\n(allows larger batch sizes). It seems to work well.\n\nIf you're getting 3x improvement from calling .half() instead of\ninitializing with Apex, that is strange and I can't imagine why. I've found\nthat this methods diverges more quickly from fp32 than using Apex so I\nhaven't tested much with it.\n\nOn a separate note, I'd consider trying cheaper GPUs. It may be\ncounter-intuitive, but I've found for example on Google that cheap P4 GPUs\ngive greater throughput _for their cost_ than a T4 even though they don't\nhave tensor cores. I think this is for the following reason: in generating\neach token after the first, only the values for a single position are being\ncomputed at each iteration, which is very little computation for the amount\nof memory being used. I think this results in a lot of time being spent on\nGPU-CPU round trips rather than actual work. Batch size becomes more\nimportant than GPU speed.\n\nOn Mon, Mar 9, 2020, 5:56 PM Damian Nardelli, <[email protected]>\nwrote:\n\n> I am not using the \"past\" output at the moment, just doing topk=30. I ran\n> into this issue as I am trying to improve my inferences times.\n> I have apex installed, and using the gpt2-medium model I get in average\n> per sentence an inference time of 11ms for ~50 tokens within a batch size\n> of 30 sentences on Tesla T4. Bigger batches aren't increasing the\n> throughput. I just tried adding turning fp16 into my model via \".half()\"\n> and it seems to be 3x faster. Is it possible? I am wondering whether that\n> is fine or I need to do anything else (e.g. initializing apex, re-training\n> my model with fp16). I feel I may be losing something. What do you think?\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/602?email_source=notifications&email_token=ABKUZHOUB74EQKNVRORWKB3RGVQX7A5CNFSM4HMG65OKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEOJG2FA#issuecomment-596798740>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABKUZHOS54LHVYM4OSGCR7LRGVQX7ANCNFSM4HMG65OA>\n> .the\n>\n",
"I haven't tried initializing apex explicitly in my code. I believe I just thought that was being done by `GPT2LMHeadModel` behind the scenes, but it doesn't look like... I had installed it because I saw the following warning when running my inference app: `Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex`. Does this mean that the gpt2-medium is compiled with apex support? I'm not certain about that.\r\n\r\nShould I try doing the amp.initialize() with the opt level in my inference app? Are you using O1? From https://nvidia.github.io/apex/amp.html#opt-levels-and-properties looks like I shouldn't be explicitly calling `half()`, so probably I should try initializing apex. What do you think?\r\n\r\nThanks in advance for all your help!",
"I had a question related to this: would the outputs in generation with GPT-2 change if the batch size changes?",
"Currently generation only allows `batch_size=1`"
] | 1,557 | 1,591 | 1,567 | NONE | null | When using GPT-2 with mixed precision, the generated text is different from that produced by running it normally. This is true for both conditional and unconditional generation, and for top_k=1 (deterministic) and top_k=40. Typically the mixed precision and single precision outputs agree for a number of tokens and then begin to disagree (sometimes early, sometimes late).
Using GPT-2 with mixed precision would be useful to take advantage of the tensor cores on V100 and T4 GPUs.
Testing by calling `model.half()` on GPT2LMHeadModel tends to start producing incorrect outputs early, while instead using Apex's AMP usually produces correct outputs for a little longer but still generally deviates. My tests were on the 117M model, with Apex installed.
It surprises me that the top_k=1 results often differ, sometimes very early in the sequence. They only take the largest logits, so this means the ranking of the logits is different.
I think the cause is compounding errors in the "past" tensor used by the attention function. Each time a new token is generated, its past has some error in it. When subsequent token generations then use those values (in higher attention layers), their own pasts have _more_ error. And so on, up through 16 layers for 117M or 24 for 345M. For cases where the top 2 logit values are almost the same, those 16 steps of error might be enough to change which one is larger and thereby change even the top_k=1 output. I haven't verified this idea yet.
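One way to check the hypothesis is to compare single-step logits of the two precisions directly. A rough probe, treating this era's GPT-2 loading and forward signature as assumptions:
```python
import torch
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
ids = torch.tensor([tokenizer.encode("The meaning of life is")]).cuda()

# Two copies of the same weights, one cast to half precision.
model_fp32 = GPT2LMHeadModel.from_pretrained("gpt2").cuda().eval()
model_fp16 = GPT2LMHeadModel.from_pretrained("gpt2").cuda().eval().half()

with torch.no_grad():
    logits32, _ = model_fp32(ids)  # assumed return: (lm_logits, presents)
    logits16, _ = model_fp16(ids)

last32, last16 = logits32[0, -1], logits16[0, -1].float()
print("max |delta logit|:", (last32 - last16).abs().max().item())
print("top-1 agrees:", bool(last32.argmax() == last16.argmax()))
```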
I'm not sure if this necessarily means the outputs will be _qualitatively worse_, but that's a hard thing to measure. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/602/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/601 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/601/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/601/comments | https://api.github.com/repos/huggingface/transformers/issues/601/events | https://github.com/huggingface/transformers/issues/601 | 442,639,791 | MDU6SXNzdWU0NDI2Mzk3OTE= | 601 | How to reduce embedding size from 768? | {
"login": "kbulutozler",
"id": 34663649,
"node_id": "MDQ6VXNlcjM0NjYzNjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/34663649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kbulutozler",
"html_url": "https://github.com/kbulutozler",
"followers_url": "https://api.github.com/users/kbulutozler/followers",
"following_url": "https://api.github.com/users/kbulutozler/following{/other_user}",
"gists_url": "https://api.github.com/users/kbulutozler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kbulutozler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kbulutozler/subscriptions",
"organizations_url": "https://api.github.com/users/kbulutozler/orgs",
"repos_url": "https://api.github.com/users/kbulutozler/repos",
"events_url": "https://api.github.com/users/kbulutozler/events{/privacy}",
"received_events_url": "https://api.github.com/users/kbulutozler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You need to retrain it with your embeddings replacing `BertModel.embeddings.word_embeddings` and the model size being your embeddings size.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,568 | 1,568 | NONE | null | I am using simple_lm_finetuning.py to fine-tune BERT. However, I want to get smaller embeddings. Where can I change this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/601/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/600 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/600/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/600/comments | https://api.github.com/repos/huggingface/transformers/issues/600/events | https://github.com/huggingface/transformers/issues/600 | 442,633,381 | MDU6SXNzdWU0NDI2MzMzODE= | 600 | Fine tuning time did not change much after freezing layers | {
"login": "kbulutozler",
"id": 34663649,
"node_id": "MDQ6VXNlcjM0NjYzNjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/34663649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kbulutozler",
"html_url": "https://github.com/kbulutozler",
"followers_url": "https://api.github.com/users/kbulutozler/followers",
"following_url": "https://api.github.com/users/kbulutozler/following{/other_user}",
"gists_url": "https://api.github.com/users/kbulutozler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kbulutozler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kbulutozler/subscriptions",
"organizations_url": "https://api.github.com/users/kbulutozler/orgs",
"repos_url": "https://api.github.com/users/kbulutozler/repos",
"events_url": "https://api.github.com/users/kbulutozler/events{/privacy}",
"received_events_url": "https://api.github.com/users/kbulutozler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,563 | 1,563 | NONE | null | Hi,
I am using simple_lm_finetuning.py to fine-tune the model. I wanted to freeze all parameters from the very beginning up to the start of the 12th transformer layer, so I iterated over the parameters by name with a counter, noted the counter value corresponding to the start of the 12th layer, and used that value to freeze all layers before it. The piece of code I added to simple_lm_finetuning.py makes this clearer:
```
ctr = 0
for name, param in model.named_parameters():
    ctr += 1
    # print(ctr)
    # print(name)
    if ctr < 183:  # 183 is where the 12th transformer layer starts
        param.requires_grad = False
```
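A name-based variant is less brittle than a raw counter. This sketch assumes this era's parameter naming (`bert.embeddings.*`, `bert.encoder.layer.N.*`); print the names first to confirm:
```python
for name, param in model.named_parameters():
    if name.startswith("bert.embeddings."):
        param.requires_grad = False
    elif name.startswith("bert.encoder.layer."):
        layer_idx = int(name.split(".")[3])
        param.requires_grad = layer_idx >= 11  # train only the 12th layer (index 11)
```
Note that freezing only removes those tensors from backward and optimizer work; the forward pass still runs through every layer, so the saving is well below proportional to the number of frozen layers.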
With an Nvidia K80 and the original simple_lm_finetuning.py, one epoch required about 37-38 hours. After adding this piece of code, it required about 28 hours. Since I have frozen all parameters from the beginning up to the 12th layer, I was expecting a larger reduction in time. Where am I wrong?
I am also open to suggestions of other fine-tuning methods that require less computation time. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/600/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/599 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/599/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/599/comments | https://api.github.com/repos/huggingface/transformers/issues/599/events | https://github.com/huggingface/transformers/issues/599 | 442,603,643 | MDU6SXNzdWU0NDI2MDM2NDM= | 599 | BERT tokenizer - set special tokens | {
"login": "adigoryl",
"id": 31667817,
"node_id": "MDQ6VXNlcjMxNjY3ODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/31667817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adigoryl",
"html_url": "https://github.com/adigoryl",
"followers_url": "https://api.github.com/users/adigoryl/followers",
"following_url": "https://api.github.com/users/adigoryl/following{/other_user}",
"gists_url": "https://api.github.com/users/adigoryl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adigoryl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adigoryl/subscriptions",
"organizations_url": "https://api.github.com/users/adigoryl/orgs",
"repos_url": "https://api.github.com/users/adigoryl/repos",
"events_url": "https://api.github.com/users/adigoryl/events{/privacy}",
"received_events_url": "https://api.github.com/users/adigoryl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi Adrian, BERT already has a few unused tokens that can be used similarly to the `special_tokens` of GPT/GPT-2.\r\nFor more details see https://github.com/google-research/bert/issues/9#issuecomment-434796704 and issue #405 for instance.",
"In case we use an unused special token from the vocabulary, is it enough to finetune a classification task or do we need to train an embedding from scratch? Did anyone already do this?\r\n\r\nTwo different and somehow related questions I had when looking into the implementation:\r\n\r\n1) The Bert paper mentions a (learned) positional embedding. How is this implemented here? examples/extract_features/convert_examples_to_features() defines tokens (representation), input_type_ids (the difference between the first and second sequence) and an input_mask (distinguishing padding/real tokens) but no positional embedding. Is this done internally?\r\n\r\n2) Can I use a special token as input_type_ids for Bert? In the classification example, only values of [0,1] are possible and I'm wondering what would happen if I would choose a special token instead? Is this possible with a pretrained embedding or do i need to retrain the whole embedding as a consequence?\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,564 | 1,564 | NONE | null | Hi,
I was wondering whether the team could expand BERT so that fine-tuning with newly defined special tokens would be possible - just like the GPT allows.
@thomwolf Could you share your thought with me on that?
Regards,
Adrian. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/599/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/598 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/598/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/598/comments | https://api.github.com/repos/huggingface/transformers/issues/598/events | https://github.com/huggingface/transformers/pull/598 | 442,291,837 | MDExOlB1bGxSZXF1ZXN0Mjc3NDM2MTQz | 598 | Updating learning rate with special warm up in examples | {
"login": "burcturkoglu",
"id": 20150809,
"node_id": "MDQ6VXNlcjIwMTUwODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/20150809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/burcturkoglu",
"html_url": "https://github.com/burcturkoglu",
"followers_url": "https://api.github.com/users/burcturkoglu/followers",
"following_url": "https://api.github.com/users/burcturkoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/burcturkoglu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/burcturkoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/burcturkoglu/subscriptions",
"organizations_url": "https://api.github.com/users/burcturkoglu/orgs",
"repos_url": "https://api.github.com/users/burcturkoglu/repos",
"events_url": "https://api.github.com/users/burcturkoglu/events{/privacy}",
"received_events_url": "https://api.github.com/users/burcturkoglu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh great thanks Burc!"
] | 1,557 | 1,557 | 1,557 | CONTRIBUTOR | null | Updating examples by removing division to num_train_optimization_steps for new WarmupLinearSchedule.
Fixes #566 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/598/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/598",
"html_url": "https://github.com/huggingface/transformers/pull/598",
"diff_url": "https://github.com/huggingface/transformers/pull/598.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/598.patch",
"merged_at": 1557488893000
} |
https://api.github.com/repos/huggingface/transformers/issues/597 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/597/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/597/comments | https://api.github.com/repos/huggingface/transformers/issues/597/events | https://github.com/huggingface/transformers/pull/597 | 441,921,438 | MDExOlB1bGxSZXF1ZXN0Mjc3MTQ1ODk3 | 597 | GPT-2 (medium size model, special_tokens, fine-tuning, attention) + repo code coverage metric | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@f9cde97`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `81%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #597 +/- ##\n=========================================\n Coverage ? 67.04% \n=========================================\n Files ? 18 \n Lines ? 3835 \n Branches ? 0 \n=========================================\n Hits ? 2571 \n Misses ? 1264 \n Partials ? 0\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/tokenization\\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX29wZW5haS5weQ==) | `81.34% <0%> (ø)` | |\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `82.44% <20%> (ø)` | |\n| [pytorch\\_pretrained\\_bert/modeling\\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfb3BlbmFpLnB5) | `78.3% <68.51%> (ø)` | |\n| [pytorch\\_pretrained\\_bert/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `79.04% <73.11%> (ø)` | |\n| [pytorch\\_pretrained\\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `88.57% <98.09%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597?src=pr&el=footer). Last update [f9cde97...35e6baa](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/597?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi, is the script run_openai.py updating weights of all layers or just the last few? Thanks in advance!"
] | 1,557 | 1,566 | 1,560 | MEMBER | null | Superseded #560.
Improvements to GPT-2:
- add special tokens
- tested fine-tuning
- add medium size model
Improvements to GPT/GPT-2:
- option to extract attention weights.
Add code coverage | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/597/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/597",
"html_url": "https://github.com/huggingface/transformers/pull/597",
"diff_url": "https://github.com/huggingface/transformers/pull/597.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/597.patch",
"merged_at": 1560523652000
} |
https://api.github.com/repos/huggingface/transformers/issues/596 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/596/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/596/comments | https://api.github.com/repos/huggingface/transformers/issues/596/events | https://github.com/huggingface/transformers/issues/596 | 441,715,398 | MDU6SXNzdWU0NDE3MTUzOTg= | 596 | [Question] Cross-lingual sentence representations | {
"login": "shoegazerstella",
"id": 22822597,
"node_id": "MDQ6VXNlcjIyODIyNTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/22822597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shoegazerstella",
"html_url": "https://github.com/shoegazerstella",
"followers_url": "https://api.github.com/users/shoegazerstella/followers",
"following_url": "https://api.github.com/users/shoegazerstella/following{/other_user}",
"gists_url": "https://api.github.com/users/shoegazerstella/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shoegazerstella/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shoegazerstella/subscriptions",
"organizations_url": "https://api.github.com/users/shoegazerstella/orgs",
"repos_url": "https://api.github.com/users/shoegazerstella/repos",
"events_url": "https://api.github.com/users/shoegazerstella/events{/privacy}",
"received_events_url": "https://api.github.com/users/shoegazerstella/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @shoegazerstella well XLM is already pretty much as powerful as BERT and focused on cross-lingual sentence representations so I would go directly for it instead of BERT.",
"Thanks @thomwolf, \r\nAre you considering integrating something for cross-lingual representations in the `pytorch-pretrained-BERT` library in the near future?",
"Not in the short-term"
] | 1,557 | 1,557 | 1,557 | NONE | null | Hi,
Would it be possible to also integrate a BERT model for cross-lingual sentence representations?
Something like, for example, the `XNLI-15` model in [https://github.com/facebookresearch/XLM](https://github.com/facebookresearch/XLM).
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/596/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/595 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/595/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/595/comments | https://api.github.com/repos/huggingface/transformers/issues/595/events | https://github.com/huggingface/transformers/issues/595 | 441,198,290 | MDU6SXNzdWU0NDExOTgyOTA= | 595 | Unclear error message when unable to cache the model | {
"login": "czyzby",
"id": 11707612,
"node_id": "MDQ6VXNlcjExNzA3NjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/11707612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/czyzby",
"html_url": "https://github.com/czyzby",
"followers_url": "https://api.github.com/users/czyzby/followers",
"following_url": "https://api.github.com/users/czyzby/following{/other_user}",
"gists_url": "https://api.github.com/users/czyzby/gists{/gist_id}",
"starred_url": "https://api.github.com/users/czyzby/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czyzby/subscriptions",
"organizations_url": "https://api.github.com/users/czyzby/orgs",
"repos_url": "https://api.github.com/users/czyzby/repos",
"events_url": "https://api.github.com/users/czyzby/events{/privacy}",
"received_events_url": "https://api.github.com/users/czyzby/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, this error message hides several potential sources, I'll see if I can disentangle the error messages :) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This is still an issue. I suggest to improve the message and raise an exception if unable to load any of the models, instead of silently returning `None`.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Still an issue, as far as I know.",
"@czyzby Hello,How do you solve the cache problem? I have the same problem but can't fix it",
"@twothousand To be honest, I don't remember, but I think the directory did not exist or lacked write permission. Are you sure the cache is causing the problem? If you changed the cache directory, make sure the folder exists and has appropriate permissions - otherwise I'd debug model loading and see which exception is being ignored.\r\n\r\n@thomwolf It seems that it's still an issue, will you look into proper error handling?",
"Yes, it should have been improved on the latest release 2.1.1 (with the merge of #1480).\r\n\r\nMaybe open a new issue with clear details of the current issue?"
] | 1,557 | 1,571 | 1,571 | NONE | null | I encountered the following error:
```
[2019-05-07 11:06:51,904: ERROR/ForkPoolWorker-1] Model name 'bert-base-uncased'
was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased,
bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased,
bert-base-chinese).
We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz'
was a path or url but couldn't find any file associated to this path or url.
```
After some debugging, I found that the root cause of the issue was the fact that the application is unable to cache the model in the home directory. It was a simple I/O error rather than an issue with the model name or file downloading, as the message suggests. I think it would be worth it to handle this case with an appropriate message and, more importantly, to throw an exception.
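In the meantime, a caller-side guard makes the failure loud. A sketch (the `None` return is as observed in this report, not a documented contract; the cache path is hypothetical):
```python
from pytorch_pretrained_bert import BertModel, BertTokenizer

cache_dir = "/srv/bert-cache"  # hypothetical; must exist and be writable
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", cache_dir=cache_dir)
model = BertModel.from_pretrained("bert-base-uncased", cache_dir=cache_dir)
if tokenizer is None or model is None:
    raise RuntimeError(
        "Failed to load BERT; check network access and write permission on " + cache_dir
    )
```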
In my case, I did get the error logs, but the application initialized "successfully" - with the tokenizer and model set to `None`. If the library is not able to load the model for any reason, I'd expect it to throw an exception rather than just (almost) silently return a `None`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/595/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/595/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/594 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/594/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/594/comments | https://api.github.com/repos/huggingface/transformers/issues/594/events | https://github.com/huggingface/transformers/issues/594 | 441,149,570 | MDU6SXNzdWU0NDExNDk1NzA= | 594 | size mismatch for lm_head.decoder.weight | {
"login": "Wingie",
"id": 140260,
"node_id": "MDQ6VXNlcjE0MDI2MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/140260?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wingie",
"html_url": "https://github.com/Wingie",
"followers_url": "https://api.github.com/users/Wingie/followers",
"following_url": "https://api.github.com/users/Wingie/following{/other_user}",
"gists_url": "https://api.github.com/users/Wingie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wingie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wingie/subscriptions",
"organizations_url": "https://api.github.com/users/Wingie/orgs",
"repos_url": "https://api.github.com/users/Wingie/repos",
"events_url": "https://api.github.com/users/Wingie/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wingie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"```Traceback (most recent call last):\r\n File \"question-generation/interact.py\", line 238, in <module>\r\n run()\r\n File \"question-generation/interact.py\", line 144, in run\r\n model = GPT2LMHeadModel.from_pretrained(args.model_checkpoint)\r\n File \"/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/modeling_gpt2.py\", line 475, in from_pretrained\r\n \"Error(s) in loading state_dict for {}:\\n\\t{}\".format(model.__class__.__name__, \"\\n\\t\".join(error_msgs))\r\nRuntimeError: Error(s) in loading state_dict for GPT2LMHeadModel:\r\n\tsize mismatch for transformer.wte.weight: copying a param with shape torch.Size([50265, 768]) from checkpoint, the shape in current model is torch.Size([50257, 768]).\r\n\tsize mismatch for lm_head.decoder.weight: copying a param with shape torch.Size([50265, 768]) from checkpoint, the shape in current model is torch.Size([50257, 768]).\r\n```\r\nI am having a similar issue? Any idea how to solve this? @Wingie ",
"+1",
"+1",
"+1\r\n",
"> ```\r\n> File \"question-generation/interact.py\", line 238, in <module>\r\n> run()\r\n> File \"question-generation/interact.py\", line 144, in run\r\n> model = GPT2LMHeadModel.from_pretrained(args.model_checkpoint)\r\n> File \"/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/modeling_gpt2.py\", line 475, in from_pretrained\r\n> \"Error(s) in loading state_dict for {}:\\n\\t{}\".format(model.__class__.__name__, \"\\n\\t\".join(error_msgs))\r\n> RuntimeError: Error(s) in loading state_dict for GPT2LMHeadModel:\r\n> \tsize mismatch for transformer.wte.weight: copying a param with shape torch.Size([50265, 768]) from checkpoint, the shape in current model is torch.Size([50257, 768]).\r\n> \tsize mismatch for lm_head.decoder.weight: copying a param with shape torch.Size([50265, 768]) from checkpoint, the shape in current model is torch.Size([50257, 768]).\r\n> ```\r\n> \r\n> I am having a similar issue? Any idea how to solve this? @Wingie\r\n\r\nWere you able to fix this? I am having this issue, with the same sizes as well.",
"same here!\r\n"
] | 1,557 | 1,674 | 1,563 | NONE | null | Hi, I'm new to this.
First, I started a fine-tuning job:
```
export ROC_STORIES_DIR=roc/
python run_openai_gpt.py \
--model_name openai-gpt \
--do_train \
--do_eval \
--train_dataset $ROC_STORIES_DIR/cloze_test_val__spring2016\ -\ cloze_test_ALL_val.csv \
--eval_dataset $ROC_STORIES_DIR/cloze_test_test__spring2016\ -\ cloze_test_ALL_test.csv \
--output_dir ../roc_gpt \
  --train_batch_size 16
```
I have the following files in the output folder:
```
config.json eval_results.txt merges.txt pytorch_model.bin special_tokens.txt vocab.json
```
However, when I run
```
python run_gpt2.py --model_name_or_path=../roc_gpt/
```
I get this error:
```
(env) [wwilson@b-user-wwilson-m ~/persistent-disk/notebooks/pytorch-pretrained-BERT/examples]$ python run_gpt2.py --model_name_or_path=../roc_gpt/
Namespace(batch_size=-1, length=-1, model_name_or_path='../roc_gpt/', nsamples=1, seed=0, temperature=1.0, top_k=0, unconditional=False)
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.tokenization_gpt2 - loading special tokens file ../roc_gpt/special_tokens.txt
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.tokenization_gpt2 - loading vocabulary file ../roc_gpt/vocab.json
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.tokenization_gpt2 - loading merges file ../roc_gpt/merges.txt
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.tokenization_gpt2 - Special tokens {'_start_': 40478, '_delimiter_': 40479, '_classify_': 40480}
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.modeling_gpt2 - loading weights file ../roc_gpt/pytorch_model.bin
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.modeling_gpt2 - loading configuration file ../roc_gpt/config.json
05/07/2019 09:58:24 - INFO - pytorch_pretrained_bert.modeling_gpt2 - Model config {
"afn": "gelu",
"attn_pdrop": 0.1,
"embd_pdrop": 0.1,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"n_ctx": 512,
"n_embd": 768,
"n_head": 12,
"n_layer": 12,
"n_positions": 512,
"n_special": 3,
"resid_pdrop": 0.1,
"vocab_size": 40478
}
05/07/2019 09:58:26 - INFO - pytorch_pretrained_bert.modeling_gpt2 - Weights of GPT2LMHeadModel not initialized from pretrained model: ['transformer.wte.weight', 'transformer.wpe.weight', 'transformer.ln_f.weight', 'transformer.ln_f.bias']
05/07/2019 09:58:26 - INFO - pytorch_pretrained_bert.modeling_gpt2 - Weights from pretrained model not used in GPT2LMHeadModel: ['multiple_choice_head.linear.weight', 'multiple_choice_head.linear.bias', 'transformer.tokens_embed.weight', 'transformer.positions_embed.weight']
Traceback (most recent call last):
File "run_gpt2.py", line 129, in <module>
run_model()
File "run_gpt2.py", line 77, in run_model
model = GPT2LMHeadModel.from_pretrained(args.model_name_or_path)
File "/mnt/notebooks/notebooks/pytorch-pretrained-BERT/env/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling_gpt2.py", line 475, in from_pretrained
"Error(s) in loading state_dict for {}:\n\t{}".format(model.__class__.__name__, "\n\t".join(error_msgs))
RuntimeError: Error(s) in loading state_dict for GPT2LMHeadModel:
size mismatch for lm_head.decoder.weight: copying a param with shape torch.Size([40481, 768]) from checkpoint, the shape in current model is torch.Size([40478, 768]).
```
I'm guessing the first script is for GPT and the second one is for GPT-2?
Should I adjust the second script to load the same classes as the training script in order to use the fine-tuned model?
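In case it helps to be concrete, here is the kind of loading I have in mind (a minimal sketch; I'm assuming the checkpoint really is a GPT model with the 3 special tokens logged above, 40478 + 3 = 40481, and the `special_tokens` / `num_special_tokens` keywords of the pytorch_pretrained_bert GPT classes):
```python
from pytorch_pretrained_bert import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

model_dir = "../roc_gpt/"
special_tokens = ["_start_", "_delimiter_", "_classify_"]

# Load with the GPT (not GPT-2) classes, telling the model about the
# 3 extra special-token embeddings so the checkpoint shape (40481 x 768)
# matches the instantiated model.
tokenizer = OpenAIGPTTokenizer.from_pretrained(model_dir, special_tokens=special_tokens)
model = OpenAIGPTLMHeadModel.from_pretrained(model_dir, num_special_tokens=len(special_tokens))
```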
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/594/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/593 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/593/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/593/comments | https://api.github.com/repos/huggingface/transformers/issues/593/events | https://github.com/huggingface/transformers/issues/593 | 441,132,791 | MDU6SXNzdWU0NDExMzI3OTE= | 593 | Embedding' object has no attribute 'shape' | {
"login": "Dhanachandra",
"id": 10828657,
"node_id": "MDQ6VXNlcjEwODI4NjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/10828657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dhanachandra",
"html_url": "https://github.com/Dhanachandra",
"followers_url": "https://api.github.com/users/Dhanachandra/followers",
"following_url": "https://api.github.com/users/Dhanachandra/following{/other_user}",
"gists_url": "https://api.github.com/users/Dhanachandra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dhanachandra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dhanachandra/subscriptions",
"organizations_url": "https://api.github.com/users/Dhanachandra/orgs",
"repos_url": "https://api.github.com/users/Dhanachandra/repos",
"events_url": "https://api.github.com/users/Dhanachandra/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dhanachandra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@Dhanachandra Same issue with another pre-trained BERT model.\r\nHave you managed to solve that?",
"> @Dhanachandra Same issue with another pre-trained BERT model.\r\n> Have you managed to solve that?\r\n\r\n@Dhanachandra I've just found the solution: this exception occurs at \"modeling.py\" module. It's because of tensorflow->pytorch transformation. You need to find rows where 'shape' is used in modeling.py (you'll see its path in error logs) and delete it (it's somewhere in try... assert ... except..., just delete it).",
"You can use the following code: \r\n\r\ntf_path = 'pubmed_pmc_470k/biobert_model.ckpt'\r\nconfig_path = 'pubmed_pmc_470k/bert_config.json'\r\npytorch_dump_path = 'pytorch_model/pytorch_model.bin'\r\n# Save pytorch-model\r\n\r\nimport os\r\nimport re\r\nimport argparse\r\nimport tensorflow as tf\r\nimport torch\r\nimport numpy as np\r\n\r\nfrom pytorch_pretrained_bert import BertConfig, BertForPreTraining\r\n\r\ndef convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path):\r\n config_path = os.path.abspath(bert_config_file)\r\n tf_path = os.path.abspath(tf_checkpoint_path)\r\n print(\"Converting TensorFlow checkpoint from {} with config at {}\".format(tf_path, config_path))\r\n # Load weights from TF model\r\n init_vars = tf.train.list_variables(tf_path)\r\n excluded = ['BERTAdam','_power','global_step']\r\n init_vars = list(filter(lambda x:all([True if e not in x[0] else False for e in excluded]),init_vars))\r\n names = []\r\n arrays = []\r\n for name, shape in init_vars:\r\n print(\"Loading TF weight {} with shape {}\".format(name, shape))\r\n array = tf.train.load_variable(tf_path, name)\r\n names.append(name)\r\n arrays.append(array)\r\n\r\n # Initialise PyTorch model\r\n config = BertConfig.from_json_file(bert_config_file)\r\n print(\"Building PyTorch model from configuration: {}\".format(str(config)))\r\n model = BertForPreTraining(config)\r\n\r\n for name, array in zip(names, arrays):\r\n name = name.split('/')\r\n # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v\r\n # which are not required for using pretrained model\r\n if any(n in [\"adam_v\", \"adam_m\", \"global_step\"] for n in name):\r\n print(\"Skipping {}\".format(\"/\".join(name)))\r\n continue\r\n pointer = model\r\n for m_name in name:\r\n if re.fullmatch(r'[A-Za-z]+_\\d+', m_name):\r\n l = re.split(r'_(\\d+)', m_name)\r\n else:\r\n l = [m_name]\r\n if l[0] == 'kernel' or l[0] == 'gamma':\r\n pointer = getattr(pointer, 'weight')\r\n elif l[0] == 'output_bias' or l[0] == 'beta':\r\n pointer = getattr(pointer, 'bias')\r\n elif l[0] == 'output_weights':\r\n pointer = getattr(pointer, 'weight')\r\n else:\r\n pointer = getattr(pointer, l[0])\r\n if len(l) >= 2:\r\n num = int(l[1])\r\n pointer = pointer[num]\r\n if m_name[-11:] == '_embeddings':\r\n pointer = getattr(pointer, 'weight')\r\n elif m_name == 'kernel':\r\n array = np.transpose(array)\r\n try:\r\n assert pointer.shape == array.shape\r\n except AssertionError as e:\r\n e.args += (pointer.shape, array.shape)\r\n raise\r\n print(\"Initialize PyTorch weight {}\".format(name))\r\n pointer.data = torch.from_numpy(array)\r\n\r\n # Save pytorch-model\r\n print(\"Save PyTorch model to {}\".format(pytorch_dump_path))\r\n torch.save(model.state_dict(), pytorch_dump_path)\r\n\r\nconvert_tf_checkpoint_to_pytorch(tf_path, config_path, pytorch_dump_path) `",
"@DmLitov4 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,665 | 1,564 | NONE | null | I get this error while running the script to convert the TensorFlow checkpoint to a PyTorch model:
Model path: https://github.com/naver/biobert-pretrained/releases/download/v1.0-pubmed-pmc/biobert_pubmed_pmc.tar.gz
```
python pytorch_pretrained_BERT/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py \
  --tf_checkpoint_path pubmed_pmc_470k/biobert_model.ckpt \
  --bert_config_file pubmed_pmc_470k/bert_config.json \
  --pytorch_dump_path pytorch_model
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/593/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/592 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/592/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/592/comments | https://api.github.com/repos/huggingface/transformers/issues/592/events | https://github.com/huggingface/transformers/issues/592 | 441,030,704 | MDU6SXNzdWU0NDEwMzA3MDQ= | 592 | Can the use of [SEP] reduce the information extraction between the sentences? | {
"login": "RomanShen",
"id": 23472425,
"node_id": "MDQ6VXNlcjIzNDcyNDI1",
"avatar_url": "https://avatars.githubusercontent.com/u/23472425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RomanShen",
"html_url": "https://github.com/RomanShen",
"followers_url": "https://api.github.com/users/RomanShen/followers",
"following_url": "https://api.github.com/users/RomanShen/following{/other_user}",
"gists_url": "https://api.github.com/users/RomanShen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RomanShen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RomanShen/subscriptions",
"organizations_url": "https://api.github.com/users/RomanShen/orgs",
"repos_url": "https://api.github.com/users/RomanShen/repos",
"events_url": "https://api.github.com/users/RomanShen/events{/privacy}",
"received_events_url": "https://api.github.com/users/RomanShen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think so. Ultimately you should have s1 and s2 in input, your [CLS] + s1 + s2 + [SEP] will be equivalent to `[CLS] + s1 + [SEP]` in `[CLS] + s1 + [SEP] + s2 + [SEP]` where `s1` now is the concatenation of `s1` and `s2`. I don't think that's what you want to do."
] | 1,557 | 1,558 | 1,558 | NONE | null | Hello. I know that [CLS] marks the start of a sequence and [SEP] lets BERT know where the second sentence begins; [SEP] can’t stop one sentence from extracting information from the other. However, I have a question.
Suppose I have 2 sentences, s1 and s2, and the fine-tuning task is the same in both cases. In one method, I add special tokens so the input looks like [CLS] + s1 + [SEP] + s2 + [SEP]. In the other, I make the input look like [CLS] + s1 + s2 + [SEP]. When I feed them to BERT, what is the difference between them? Will the s1 in the second method integrate more information from s2 than the s1 in the first method does? Will the token embeddings change a lot between the 2 methods?
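For concreteness, here is a small sketch of the two encodings I mean (just an illustration with toy sentences; besides the extra [SEP] token, the other difference is the segment / token-type ids, while self-attention still spans the whole sequence in both cases):
```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
s1, s2 = "the cat sat", "the dog ran"  # toy sentences

# Method 1: two segments separated by [SEP], segment ids 0 / 1
tokens_a = ["[CLS]"] + tokenizer.tokenize(s1) + ["[SEP]"]
tokens_b = tokenizer.tokenize(s2) + ["[SEP]"]
input_ids_1 = tokenizer.convert_tokens_to_ids(tokens_a + tokens_b)
segment_ids_1 = [0] * len(tokens_a) + [1] * len(tokens_b)

# Method 2: one segment with no inner [SEP], all segment ids 0
tokens = ["[CLS]"] + tokenizer.tokenize(s1) + tokenizer.tokenize(s2) + ["[SEP]"]
input_ids_2 = tokenizer.convert_tokens_to_ids(tokens)
segment_ids_2 = [0] * len(tokens)
```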
Thanks for any help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/592/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/591 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/591/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/591/comments | https://api.github.com/repos/huggingface/transformers/issues/591/events | https://github.com/huggingface/transformers/issues/591 | 441,018,048 | MDU6SXNzdWU0NDEwMTgwNDg= | 591 | What is the use of [SEP]? | {
"login": "RomanShen",
"id": 23472425,
"node_id": "MDQ6VXNlcjIzNDcyNDI1",
"avatar_url": "https://avatars.githubusercontent.com/u/23472425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RomanShen",
"html_url": "https://github.com/RomanShen",
"followers_url": "https://api.github.com/users/RomanShen/followers",
"following_url": "https://api.github.com/users/RomanShen/following{/other_user}",
"gists_url": "https://api.github.com/users/RomanShen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RomanShen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RomanShen/subscriptions",
"organizations_url": "https://api.github.com/users/RomanShen/orgs",
"repos_url": "https://api.github.com/users/RomanShen/repos",
"events_url": "https://api.github.com/users/RomanShen/events{/privacy}",
"received_events_url": "https://api.github.com/users/RomanShen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@RomanShen What is your observation on your question"
] | 1,557 | 1,558 | 1,557 | NONE | null | Hello. I know that [CLS] marks the start of a sequence and [SEP] lets BERT know where the second sentence begins; [SEP] can’t stop one sentence from extracting information from the other. However, I have a question.
Suppose I have 2 sentences, s1 and s2, and the fine-tuning task is the same in both cases. In one method, I add special tokens so the input looks like [CLS] + s1 + [SEP] + s2 + [SEP]. In the other, I make the input look like [CLS] + s1 + s2 + [SEP]. When I feed them to BERT, what is the difference between them? Will the s1 in the second method integrate more information from s2 than the s1 in the first method does? Will the token embeddings change a lot between the 2 methods?
Thanks for any help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/591/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/590 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/590/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/590/comments | https://api.github.com/repos/huggingface/transformers/issues/590/events | https://github.com/huggingface/transformers/pull/590 | 440,755,978 | MDExOlB1bGxSZXF1ZXN0Mjc2MjMwNzk2 | 590 | Fix for computing t_total in examples | {
"login": "lukovnikov",
"id": 1732910,
"node_id": "MDQ6VXNlcjE3MzI5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1732910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukovnikov",
"html_url": "https://github.com/lukovnikov",
"followers_url": "https://api.github.com/users/lukovnikov/followers",
"following_url": "https://api.github.com/users/lukovnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/lukovnikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukovnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukovnikov/subscriptions",
"organizations_url": "https://api.github.com/users/lukovnikov/orgs",
"repos_url": "https://api.github.com/users/lukovnikov/repos",
"events_url": "https://api.github.com/users/lukovnikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukovnikov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closed in favour of #604 "
] | 1,557 | 1,557 | 1,557 | CONTRIBUTOR | null | The examples computed t_total incorrectly, resulting in warning messages (Issue #556).
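For reference, t_total should be the total number of optimizer updates over the whole run; a minimal sketch of the intended computation (the helper name is illustrative, the variable names follow the examples):
```python
import math

def compute_t_total(num_train_examples, train_batch_size,
                    gradient_accumulation_steps, num_train_epochs):
    # One optimizer step happens every `gradient_accumulation_steps`
    # batches, so t_total = steps per epoch * number of epochs.
    steps_per_epoch = math.ceil(
        num_train_examples / train_batch_size / gradient_accumulation_steps)
    return steps_per_epoch * num_train_epochs
```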
Added fixes in several examples but:
- only tested MRPC in `run_classifier.py` so far
- `finetune_on_pregenerated.py` still needs fixing (not sure why lines 221-227 are as they are) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/590/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/590",
"html_url": "https://github.com/huggingface/transformers/pull/590",
"diff_url": "https://github.com/huggingface/transformers/pull/590.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/590.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/589 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/589/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/589/comments | https://api.github.com/repos/huggingface/transformers/issues/589/events | https://github.com/huggingface/transformers/issues/589 | 440,702,570 | MDU6SXNzdWU0NDA3MDI1NzA= | 589 | Can't save converted checkpoint | {
"login": "Gal1eo",
"id": 47275922,
"node_id": "MDQ6VXNlcjQ3Mjc1OTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/47275922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gal1eo",
"html_url": "https://github.com/Gal1eo",
"followers_url": "https://api.github.com/users/Gal1eo/followers",
"following_url": "https://api.github.com/users/Gal1eo/following{/other_user}",
"gists_url": "https://api.github.com/users/Gal1eo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gal1eo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gal1eo/subscriptions",
"organizations_url": "https://api.github.com/users/Gal1eo/orgs",
"repos_url": "https://api.github.com/users/Gal1eo/repos",
"events_url": "https://api.github.com/users/Gal1eo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gal1eo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,562 | 1,562 | NONE | null | Thank you for creating the PyTorch version of BERT. But there is a problem when I use the convert_tf_checkpoint_to_pytorch script: I can't find any files created under the pytorch_dump_path. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/589/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/588 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/588/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/588/comments | https://api.github.com/repos/huggingface/transformers/issues/588/events | https://github.com/huggingface/transformers/issues/588 | 440,677,033 | MDU6SXNzdWU0NDA2NzcwMzM= | 588 | installation error | {
"login": "kbulutozler",
"id": 34663649,
"node_id": "MDQ6VXNlcjM0NjYzNjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/34663649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kbulutozler",
"html_url": "https://github.com/kbulutozler",
"followers_url": "https://api.github.com/users/kbulutozler/followers",
"following_url": "https://api.github.com/users/kbulutozler/following{/other_user}",
"gists_url": "https://api.github.com/users/kbulutozler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kbulutozler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kbulutozler/subscriptions",
"organizations_url": "https://api.github.com/users/kbulutozler/orgs",
"repos_url": "https://api.github.com/users/kbulutozler/repos",
"events_url": "https://api.github.com/users/kbulutozler/events{/privacy}",
"received_events_url": "https://api.github.com/users/kbulutozler/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Which commands are you running?",
"I am so sorry that I took your time, I accidentally posted this here. "
] | 1,557 | 1,557 | 1,557 | NONE | null | Hi, I am getting an error after following the installation instructions stated in the README.
The error message from my output is here:
> error: command '/usr/bin/nvcc' failed with exit status 1
> error
> Cleaning up...
> Removing source in /tmp/pip-req-build-837wsq53
> Removed build tracker '/tmp/pip-req-tracker-txkml2po'
> Command "/home/ubuntu/anaconda3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-req-build-837wsq53/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" --cpp_ext --cuda_ext install --record /tmp/pip-record-qpb_35xo/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-req-build-837wsq53/
>
The whole output is here:
> error: command '/usr/bin/nvcc' failed with exit status 1
> error
> Cleaning up...
> Removing source in /tmp/pip-req-build-837wsq53
> Removed build tracker '/tmp/pip-req-tracker-txkml2po'
> Command "/home/ubuntu/anaconda3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-req-build-837wsq53/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" --cpp_ext --cuda_ext install --record /tmp/pip-record-qpb_35xo/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-req-build-837wsq53/
> Exception information:
> Traceback (most recent call last):
> File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pip/_internal/cli/base_command.py", line 143, in main
> status = self.run(options, args)
> File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pip/_internal/commands/install.py", line 366, in run
> use_user_site=options.use_user_site,
> File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pip/_internal/req/__init__.py", line 49, in install_given_reqs
> **kwargs
> File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pip/_internal/req/req_install.py", line 791, in install
> spinner=spinner,
> File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/pip/_internal/utils/misc.py", line 705, in call_subprocess
> % (command_desc, proc.returncode, cwd))
> pip._internal.exceptions.InstallationError: Command "/home/ubuntu/anaconda3/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-req-build-837wsq53/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" --cpp_ext --cuda_ext install --record /tmp/pip-record-qpb_35xo/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-req-build-837wsq53/
> 1 location(s) to search for versions of pip:
> * https://pypi.org/simple/pip/
> Getting page https://pypi.org/simple/pip/
> Starting new HTTPS connection (1): pypi.org:443
> https://pypi.org:443 "GET /simple/pip/ HTTP/1.1" 200 11064
> Analyzing links from page https://pypi.org/simple/pip/
> Found link https://files.pythonhosted.org/packages/3d/9d/1e313763bdfb6a48977b65829c6ce2a43eaae29ea2f907c8bbef024a7219/pip-0.2.tar.gz#sha256=88bb8d029e1bf4acd0e04d300104b7440086f94cc1ce1c5c3c31e3293aee1f81 (from https://pypi.org/simple/pip/), version: 0.2
> Found link https://files.pythonhosted.org/packages/18/ad/c0fe6cdfe1643a19ef027c7168572dac6283b80a384ddf21b75b921877da/pip-0.2.1.tar.gz#sha256=83522005c1266cc2de97e65072ff7554ac0f30ad369c3b02ff3a764b962048da (from https://pypi.org/simple/pip/),version: 0.2.1
> Found link https://files.pythonhosted.org/packages/17/05/f66144ef69b436d07f8eeeb28b7f77137f80de4bf60349ec6f0f9509e801/pip-0.3.tar.gz#sha256=183c72455cb7f8860ac1376f8c4f14d7f545aeab8ee7c22cd4caf79f35a2ed47 (from https://pypi.org/simple/pip/), version: 0.3
> Found link https://files.pythonhosted.org/packages/0a/bb/d087c9a1415f8726e683791c0b2943c53f2b76e69f527f2e2b2e9f9e7b5c/pip-0.3.1.tar.gz#sha256=34ce534f17065c78f980702928e988a6b6b2d8a9851aae5f1571a1feb9bb58d8 (from https://pypi.org/simple/pip/),version: 0.3.1
> Found link https://files.pythonhosted.org/packages/cf/c3/153571aaac6cf999f4bb09c019b1ff379b7b599ea833813a41c784eec995/pip-0.4.tar.gz#sha256=28fc67558874f71fddda7168f73595f1650523dce3bc5bf189713ecdfc1e456e (from https://pypi.org/simple/pip/), version: 0.4
> Found link https://files.pythonhosted.org/packages/8d/c7/f05c87812fa5d9562ecbc5f4f1fc1570444f53c81c834a7f662af406e3c1/pip-0.5.tar.gz#sha256=328d8412782f22568508a0d0c78a49c9920a82e44c8dfca49954fe525c152b2a (from https://pypi.org/simple/pip/), version: 0.5
> Found link https://files.pythonhosted.org/packages/9a/aa/f536b6d14fe03343367da2ff44eee28f340ae650cd017ca088b6be13084a/pip-0.5.1.tar.gz#sha256=e27650538c41fe1007a41abd4cfd0f905b822622cbe1f8e7e09d1215af207694 (from https://pypi.org/simple/pip/),version: 0.5.1
> Found link https://files.pythonhosted.org/packages/db/e6/fdf7be8a17b032c533d3f91e91e2c63dd81d3627cbe4113248a00c2d39d8/pip-0.6.tar.gz#sha256=4cf47db6815b2f435d1f44e1f35ff04823043f6161f7df9aec71a123b0c47f0d (from https://pypi.org/simple/pip/), version: 0.6
> Found link https://files.pythonhosted.org/packages/91/cd/105f4d3c75d0ae18e12623acc96f42168aaba408dd6e43c4505aa21f8e37/pip-0.6.1.tar.gz#sha256=efe47e84ffeb0ea4804f9858b8a94bebd07f5452f907ebed36d03aed06a9f9ec (from https://pypi.org/simple/pip/),version: 0.6.1
> Found link https://files.pythonhosted.org/packages/1c/c7/c0e1a9413c37828faf290f29a85a4d6034c145cc04bf1622ba8beb662ad8/pip-0.6.2.tar.gz#sha256=1c1a504d7e70d2c24246f95bd16e3d5fcec740fd144df69a407bf65a2ee67586 (from https://pypi.org/simple/pip/),version: 0.6.2
> Found link https://files.pythonhosted.org/packages/3f/af/c4b9d49fb0f286996b28dbc0955c3ad359794697eb98e0e69863908070b0/pip-0.6.3.tar.gz#sha256=1a6df71eb29b98cba11bde6d6a0d8c6dd8b0518e74ceb71fb31ea4fbb42fd313 (from https://pypi.org/simple/pip/),version: 0.6.3
> Found link https://files.pythonhosted.org/packages/ec/7a/6fe91ff0079ad0437830957c459d52f3923e516f5b453218f2a93d09a427/pip-0.7.tar.gz#sha256=ceaea0b9e494d893c8a191895301b79c1db33e41f14d3ad93e3d28a8b4e9bf27 (from https://pypi.org/simple/pip/), version: 0.7
> Found link https://files.pythonhosted.org/packages/a5/63/11303863c2f5e9d9a15d89fcf7513a4b60987007d418862e0fb65c09fff7/pip-0.7.1.tar.gz#sha256=f54f05aa17edd0036de433c44892c8fedb1fd2871c97829838feb995818d24c3 (from https://pypi.org/simple/pip/),version: 0.7.1
> Found link https://files.pythonhosted.org/packages/cd/a9/1debaa96bbc1005c1c8ad3b79fec58c198d35121546ea2e858ce0894268a/pip-0.7.2.tar.gz#sha256=98df2eb779358412bbbae75980171ae85deebc846d87e244d086520b1212da09 (from https://pypi.org/simple/pip/),version: 0.7.2
> Found link https://files.pythonhosted.org/packages/74/54/f785c327fb3d163560a879b36edae5c78ee07806be282c9d4807f6be7dd1/pip-0.8.tar.gz#sha256=9017e4484a212dd4e1a43dd9f039dd7fc8338d4eea1c339d5ae1c80726de5b0f (from https://pypi.org/simple/pip/), version: 0.8
> Found link https://files.pythonhosted.org/packages/5c/79/5e8381cc3078bae92166f2ba96de8355e8c181926505ba8882f7b099a500/pip-0.8.1.tar.gz#sha256=7176a87f35675f6468341212f3b959bb51d23ea66eb1c3692bf746c45c716fa2 (from https://pypi.org/simple/pip/),version: 0.8.1
> Found link https://files.pythonhosted.org/packages/17/3e/0a98ab032991518741e7e712a719633e6ae160f51b3d3e855194530fd308/pip-0.8.2.tar.gz#sha256=f80a3549c048bc3bbcb47844826e9c7c6fcd87e77b92bef0d9e66d1b397c4962 (from https://pypi.org/simple/pip/),version: 0.8.2
> Found link https://files.pythonhosted.org/packages/f7/9a/943fc6d879ed7220bac2e7e53096bfe78abec88d77f2f516400e0129679e/pip-0.8.3.tar.gz#sha256=1be2e18edd38aa75b5e4ef38a99ec33ba9247177cfcb4a6d2d2b3e73430e3001 (from https://pypi.org/simple/pip/),version: 0.8.3
> Found link https://files.pythonhosted.org/packages/24/33/6eb675fb6db7b71d69d6928b33dea61b8bf5cfe1e5649be70ec84ce2fc09/pip-1.0.tar.gz#sha256=34ba07e2d14ba86d5088ba896ac80bed845a9b276ab8acb279b8d99bc77fec8e (from https://pypi.org/simple/pip/), version: 1.0
> Found link https://files.pythonhosted.org/packages/10/d9/f584e6107ef98ad7eaaaa5d0f756bfee12561fa6a4712ffdb7209e0e1fd4/pip-1.0.1.tar.gz#sha256=37d2f18213d3845d2038dd3686bc71fc12bb41ad66c945a8b0dfec2879f3497b (from https://pypi.org/simple/pip/),version: 1.0.1
> Found link https://files.pythonhosted.org/packages/16/90/5e6f80364d8a656f60681dfb7330298edef292d43e1499bcb3a4c71ff0b9/pip-1.0.2.tar.gz#sha256=a6ed9b36aac2f121c01a2c9e0307a9e4d9438d100a407db701ac65479a3335d2 (from https://pypi.org/simple/pip/),version: 1.0.2
> Found link https://files.pythonhosted.org/packages/25/57/0d42cf5307d79913a082c5c4397d46f3793bc35e1138a694136d6e31be99/pip-1.1.tar.gz#sha256=993804bb947d18508acee02141281c77d27677f8c14eaa64d6287a1c53ef01c8 (from https://pypi.org/simple/pip/), version: 1.1
> Found link https://files.pythonhosted.org/packages/ba/c3/4e1f892f41aaa217fe0d1f827fa05928783349c69f3cc06fdd68e112678a/pip-1.2.tar.gz#sha256=2b168f1987403f1dc6996a1f22a6f6637b751b7ab6ff27e78380b8d6e70aa314 (from https://pypi.org/simple/pip/), version: 1.2
> Found link https://files.pythonhosted.org/packages/c3/a2/a63244da32afd9ce9a8ca1bd86e71610039adea8b8314046ebe5047527a6/pip-1.2.1.tar.gz#sha256=12a9302acfca62cdc7bc5d83386cac3e0581db61ac39acdb3a4e766a16b88eb1 (from https://pypi.org/simple/pip/),version: 1.2.1
> Found link https://files.pythonhosted.org/packages/00/45/69d4f2602b80550bfb26cfd2f62c2f05b3b5c7352705d3766cd1e5b27648/pip-1.3.tar.gz#sha256=d6a13c5be316cb21a0243047c7f163f47e88973ebccff8d32e63ca1bf4d9321c (from https://pypi.org/simple/pip/), version: 1.3
> Found link https://files.pythonhosted.org/packages/5b/ce/f5b98104f1c10d868936c25f7c597f492d4371aa9ad5fb61a94954ee7208/pip-1.3.1.tar.gz#sha256=145eaa5d1ea1b062663da1f3a97780d7edea4c63c68a37c463b1deedf7bb4957 (from https://pypi.org/simple/pip/),version: 1.3.1
> Found link https://files.pythonhosted.org/packages/5f/d0/3b3958f6a58783bae44158b2c4c7827ae89abaecdd4bed12cff402620b9a/pip-1.4.tar.gz#sha256=1fd43cbf07d95ddcecbb795c97a1674b3ddb711bb4a67661284a5aa765aa1b97 (from https://pypi.org/simple/pip/), version: 1.4
> Found link https://files.pythonhosted.org/packages/3f/f8/da390e0df72fb61d176b25a4b95262e3dcc14bda0ad25ac64d56db38b667/pip-1.4.1.tar.gz#sha256=4e7a06554711a624c35d0c646f63674b7f6bfc7f80221bf1eb1f631bd890d04e (from https://pypi.org/simple/pip/),version: 1.4.1
> Found link https://files.pythonhosted.org/packages/4f/7d/e53bc80667378125a9e07d4929a61b0bd7128a1129dbe6f07bb3228652a3/pip-1.5.tar.gz#sha256=25f81d1a0e55d3b1709818dd57fdfb954b028f229f09bd69cb0bc80a8e03e048 (from https://pypi.org/simple/pip/), version: 1.5
> Found link https://files.pythonhosted.org/packages/44/5d/1dca53b5de6d287e7eb99bd174bb022eb6cb0d6ca6e19ca6b16655dde8c2/pip-1.5.1-py2.py3-none-any.whl#sha256=00960db3b0b8724dd37fe37cfb9c72ecb8f59fab9db7d17c5c1e89a1adab49ce (from https://pypi.org/simple/pip/), version: 1.5.1
> Found link https://files.pythonhosted.org/packages/21/3f/d86a600c9b2f41a75caacf768a24130f343def97652de2345da15ef7911f/pip-1.5.1.tar.gz#sha256=e60e936fbc101d56668c6134c1f2b5b40fcbec8b4fc4ca7fc34842b6b4c5c130 (from https://pypi.org/simple/pip/),version: 1.5.1
> Found link https://files.pythonhosted.org/packages/3d/1f/227d77d5e9ed2df5162de4ba3616799a351eccb1ecd668ae824dd26153a1/pip-1.5.2-py2.py3-none-any.whl#sha256=6903909ccdcdbc3297b74118590e71344d6d262827acd1f5c0e2fcfce9807499 (from https://pypi.org/simple/pip/), version: 1.5.2
> Found link https://files.pythonhosted.org/packages/ed/94/391a003107f6ec997c314199d03bff1c105af758ee490e3255353574487b/pip-1.5.2.tar.gz#sha256=2a8a3e08e652d3a40edbb39264bf01f8ff3c32520a79113357cca1f30533f738 (from https://pypi.org/simple/pip/),version: 1.5.2
> Found link https://files.pythonhosted.org/packages/df/e9/bdb53d44fad1465b43edaf6bc7dd3027ed5af81405cc97603fdff0721ebb/pip-1.5.3-py2.py3-none-any.whl#sha256=f0037aed3ce6cf96b9e9117d42e967a74bea9ebe19088a2fdea5de93d5762fee (from https://pypi.org/simple/pip/), version: 1.5.3
> Found link https://files.pythonhosted.org/packages/55/de/671a48ad313c808623041fc475f7c8f7610401d9f573f06b40eeb84e74e3/pip-1.5.3.tar.gz#sha256=dc53b4d28b88556a37cd73052b6d1d08cc644c6724e37c4d38a2e3c03c5440b2 (from https://pypi.org/simple/pip/),version: 1.5.3
> Found link https://files.pythonhosted.org/packages/a9/9a/9aa19fe00de4c025562e5fb3796ff8520165a7dd1a5662c6ec9816e1ae99/pip-1.5.4-py2.py3-none-any.whl#sha256=fb7282556a42e84464f2e963a859ac4012d8134ba6218b70c1d82d145fcfa82f (from https://pypi.org/simple/pip/), version: 1.5.4
> Found link https://files.pythonhosted.org/packages/78/d8/6e58a7130d457edadb753a0ea5708e411c100c7e94e72ad4802feeef735c/pip-1.5.4.tar.gz#sha256=70208a250bb4afdbbdd74c3ac35d4ab9ba1eb6852d02567a6a87f2f5104e30b9 (from https://pypi.org/simple/pip/),version: 1.5.4
> Found link https://files.pythonhosted.org/packages/ce/c2/10d996b9c51b126a9f0bb9e14a9edcdd5c88888323c0685bb9b392b6c47c/pip-1.5.5-py2.py3-none-any.whl#sha256=fe7a5808190067b2598d85def9b83db46e5d64a00848ad843e107c36e1db4ae6 (from https://pypi.org/simple/pip/), version: 1.5.5
> Found link https://files.pythonhosted.org/packages/88/01/a442fde40bd9aaf837612536f16ab751fac628807fd718690795b8ade77d/pip-1.5.5.tar.gz#sha256=4b7f5124364ae9b5ba833dcd8813a84c1c06fba1d7c8543323c7af4b33188eca (from https://pypi.org/simple/pip/),version: 1.5.5
> Found link https://files.pythonhosted.org/packages/3f/08/7347ca4021e7fe0f1ab8f93cbc7d2a7a7350012300ad0e0227d55625e2b8/pip-1.5.6-py2.py3-none-any.whl#sha256=fbc1351ffedf09ca7560428758845a88d648b9730b63ce9e5df53a7c89f039a4 (from https://pypi.org/simple/pip/), version: 1.5.6
> Found link https://files.pythonhosted.org/packages/45/db/4fb9a456b4ec4d3b701456ef562b9d72d76b6358e0c1463d17db18c5b772/pip-1.5.6.tar.gz#sha256=b1a4ae66baf21b7eb05a5e4f37c50c2706fa28ea1f8780ce8efe14dcd9f1726c (from https://pypi.org/simple/pip/),version: 1.5.6
> Found link https://files.pythonhosted.org/packages/dc/7c/21191b5944b917b66e4e4e06d74f668d814b6e8a3ff7acd874479b6f6b3d/pip-6.0-py2.py3-none-any.whl#sha256=5ec6732505bd8be49fe1f8ad557b88253ffb085736396df4d6bea753fc2a8f2c (from https://pypi.org/simple/pip/), version: 6.0
> Found link https://files.pythonhosted.org/packages/38/fd/065c66a88398f240e344fdf496b9707f92d75f88eedc3d10ff847b28a657/pip-6.0.tar.gz#sha256=6103897f1bb68d3f933edd60f3e3830c4ea6b8abf7a4b500db148921b11f6c9b (from https://pypi.org/simple/pip/), version: 6.0
> Found link https://files.pythonhosted.org/packages/e9/7a/cdbc1a12ed52410d557e48d4646f4543e9e991ff32d2374dc6db849aa617/pip-6.0.1-py2.py3-none-any.whl#sha256=322aea7d1f7b9ee68ad87ac4704cad5df97f77e70668c0bd18f964c5daa78173 (from https://pypi.org/simple/pip/), version: 6.0.1
> Found link https://files.pythonhosted.org/packages/4d/c3/8675b90cd89b9b222062f4f6c7e9d48b0387f5b35cbf747a74403a883e56/pip-6.0.1.tar.gz#sha256=fa2f7c68da4a405d673aa38542f9df009d60026db4f532429ac9cbfbda1f959d (from https://pypi.org/simple/pip/),version: 6.0.1
> Found link https://files.pythonhosted.org/packages/71/3c/b5a521e5e99cfff091e282231591f21193fd80de079ec5fb8ed9c6614044/pip-6.0.2-py2.py3-none-any.whl#sha256=7d17b0f267f7c9cd17cd2924bbbe2b4a3d407322c0e09084ca3f1295c1fed50d (from https://pypi.org/simple/pip/), version: 6.0.2
> Found link https://files.pythonhosted.org/packages/4c/5a/f9e8e3de0153282c7cb54a9b991af225536ac914bac858ca664cf883bb3e/pip-6.0.2.tar.gz#sha256=6fa90667706a679e3dc75b27a51fddafa64401c45e96f8ae6c20978183290077 (from https://pypi.org/simple/pip/),version: 6.0.2
> Found link https://files.pythonhosted.org/packages/73/cb/3eebf42003791df29219a3dfa1874572aa16114b44c9b1b0ac66bf96e8c0/pip-6.0.3-py2.py3-none-any.whl#sha256=b72655b6ac6aef1c86dd07f51e8ace8d7aabd6a1c4ff88db87155276fa32a073 (from https://pypi.org/simple/pip/), version: 6.0.3
> Found link https://files.pythonhosted.org/packages/ce/63/8d99ae60d11ae1a65f5d4fc39a529a598bd3b8e067132210cb0c4d9e9f74/pip-6.0.3.tar.gz#sha256=b091a35f5fa0faffac0b27b97e1e1e93ffe63b463c2ea8dbde0c1fb987933614 (from https://pypi.org/simple/pip/),version: 6.0.3
> Found link https://files.pythonhosted.org/packages/c5/0e/c974206726542bc495fc7443dd97834a6d14c2f0cba183fcfcd01075225a/pip-6.0.4-py2.py3-none-any.whl#sha256=8dfd95de29a7a3bb1e7d368cc83d566938eb210b04d553ebfe5e3a422f4aec65 (from https://pypi.org/simple/pip/), version: 6.0.4
> Found link https://files.pythonhosted.org/packages/02/a1/c90f19910ee153d7a0efca7216758121118d7e93084276541383fe9ca82e/pip-6.0.4.tar.gz#sha256=1dbbff9c369e510c7468ab68ba52c003f68f83c99c2f8259acd51099e8799f1e (from https://pypi.org/simple/pip/),version: 6.0.4
> Found link https://files.pythonhosted.org/packages/e9/1b/c6a375a337fb576784cdea3700f6c3eaf1420f0a01458e6e034cc178a84a/pip-6.0.5-py2.py3-none-any.whl#sha256=b2c20e3a2a43b2bbb1d19ad98be27eccc7b0f0ece016da602ccaa757a862b0e2 (from https://pypi.org/simple/pip/), version: 6.0.5
> Found link https://files.pythonhosted.org/packages/19/f2/58628768f618c8c9fea878e0fb97730c0b8a838d3ab3f325768bf12dac94/pip-6.0.5.tar.gz#sha256=3bf42d28be9085ab2e9aecfd69a6da2d31563fe833304bf71a620a30c38ab8a2 (from https://pypi.org/simple/pip/),version: 6.0.5
> Found link https://files.pythonhosted.org/packages/64/fc/4a49ccb18f55a0ceeb76e8d554bd4563217117492997825d194ed0017cc1/pip-6.0.6-py2.py3-none-any.whl#sha256=fb04f8afe1ba57626783f0c8e2f3d46bbaebaa446fcf124f434e968a2fee595e (from https://pypi.org/simple/pip/), version: 6.0.6
> Found link https://files.pythonhosted.org/packages/f6/ce/d9e4e178b66c766c117f62ddf4fece019ef9d50127a8926d2f60300d615e/pip-6.0.6.tar.gz#sha256=3a14091299dcdb9bab9e9004ae67ac401f2b1b14a7c98de074ca74fdddf4bfa0 (from https://pypi.org/simple/pip/),version: 6.0.6
> Found link https://files.pythonhosted.org/packages/7a/8e/2bbd4fcf3ee06ee90ded5f39ec12f53165dfdb9ef25a981717ad38a16670/pip-6.0.7-py2.py3-none-any.whl#sha256=93a326304c7db749896bcef822bbbac1ab29dad5651c6d732e245975239890e6 (from https://pypi.org/simple/pip/), version: 6.0.7
> Found link https://files.pythonhosted.org/packages/52/85/b160ebdaa84378df6bb0176d4eed9f57edca662446174eead7a9e2e566d6/pip-6.0.7.tar.gz#sha256=35a5a43ac6b7af83ed47ea5731a365f43d350a3a7267e039e5f06b61d42ab3c2 (from https://pypi.org/simple/pip/),version: 6.0.7
> Found link https://files.pythonhosted.org/packages/63/65/55b71647adec1ad595bf0e5d76d028506dfc002df30c256f022ff7a660a5/pip-6.0.8-py2.py3-none-any.whl#sha256=3c22b0a8ff92727bd737a82f72700790591f177541df08c07bc1f90d6b72ac19 (from https://pypi.org/simple/pip/), version: 6.0.8
> Found link https://files.pythonhosted.org/packages/ef/8a/e3a980bc0a7f791d72c1302f65763ed300f2e14c907ac033e01b44c79e5e/pip-6.0.8.tar.gz#sha256=0d58487a1b7f5be2e5e965c11afbea1dc44ecec8069de03491a4d0d6c85f4551 (from https://pypi.org/simple/pip/),version: 6.0.8
> Found link https://files.pythonhosted.org/packages/24/fb/8a56a46243514681e569bbafd8146fa383476c4b7c725c8598c452366f31/pip-6.1.0-py2.py3-none-any.whl#sha256=435a018f6d29e34d4f901bf4e6860d8a5fa1816b68d62008c18ca062a306db31 (from https://pypi.org/simple/pip/), version: 6.1.0
> Found link https://files.pythonhosted.org/packages/6c/84/432eb60bbcb414b9cdfcb135d5f4925e253c74e7d6916ada79990d6cc1a0/pip-6.1.0.tar.gz#sha256=89f120e2ab3d25ab70c36eb28ad4f280fc9ba71736e74d3055f609c1f9173768 (from https://pypi.org/simple/pip/),version: 6.1.0
> Found link https://files.pythonhosted.org/packages/67/f0/ba0fb41dbdbfc4aa3e0c16b40269aca6b9e3d59cacdb646218aa2e9b1d2c/pip-6.1.1-py2.py3-none-any.whl#sha256=a67e54aa0f26b6d62ccec5cc6735eff205dd0fed075f56ac3d3111e91e4467fc (from https://pypi.org/simple/pip/), version: 6.1.1
> Found link https://files.pythonhosted.org/packages/bf/85/871c126b50b8ee0b9819e8a63b614aedd264577e73478caedcd447e8f28c/pip-6.1.1.tar.gz#sha256=89f3b626d225e08e7f20d85044afa40f612eb3284484169813dc2d0631f2a556 (from https://pypi.org/simple/pip/),version: 6.1.1
> Found link https://files.pythonhosted.org/packages/5a/9b/56d3c18d0784d5f2bbd446ea2dc7ffa7476c35e3dc223741d20cfee3b185/pip-7.0.0-py2.py3-none-any.whl#sha256=309c48399c7d68501a10ef206abd6e5c541fedbf84b95435d9063bd454b39df7 (from https://pypi.org/simple/pip/), version: 7.0.0
> Found link https://files.pythonhosted.org/packages/c6/16/6475b142927ca5d03e3b7968efa5b0edd103e4684ecfde181a25f6fa2505/pip-7.0.0.tar.gz#sha256=7b46bfc1b95494731de306a688e2a7bc056d7fa7ad27e026908fb2ae67fed23d (from https://pypi.org/simple/pip/),version: 7.0.0
> Found link https://files.pythonhosted.org/packages/5a/10/bb7a32c335bceba636aa673a4c977effa1e73a79f88856459486d8d670cf/pip-7.0.1-py2.py3-none-any.whl#sha256=d26b8573ba1ac1ec99a9bdbdffee2ff2b06c7790815211d0eb4dc1462a089705 (from https://pypi.org/simple/pip/), version: 7.0.1
> Found link https://files.pythonhosted.org/packages/4a/83/9ae4362a80739657e0c8bb628ea3fa0214a9aba7c8590dacc301ea293f73/pip-7.0.1.tar.gz#sha256=cfec177552fdd0b2d12b72651c8e874f955b4c62c1c2c9f2588cbdc1c0d0d416 (from https://pypi.org/simple/pip/),version: 7.0.1
> Found link https://files.pythonhosted.org/packages/64/7f/7107800ae0919a80afbf1ecba21b90890431c3ee79d700adac3c79cb6497/pip-7.0.2-py2.py3-none-any.whl#sha256=83c869c5ab7113866e2d69641ec470d47f0faae68ca4550a289a4d3db515ad65 (from https://pypi.org/simple/pip/), version: 7.0.2
> Found link https://files.pythonhosted.org/packages/75/b1/66532c273bca0133e42c3b4540a1609289f16e3046f1830f18c60794d661/pip-7.0.2.tar.gz#sha256=ba28fa60b573a9444e7b78ccb3b0f261d1f66f46d20403f9dce37b18a6aed405 (from https://pypi.org/simple/pip/),version: 7.0.2
> Found link https://files.pythonhosted.org/packages/96/76/33a598ae42dd0554207d83c7acc60e3b166dbde723cbf282f1f73b7a127c/pip-7.0.3-py2.py3-none-any.whl#sha256=7b1cb03e827d58d2d05e68ea96a9e27487ed4b0afcd951ac6e40847ce94f0738 (from https://pypi.org/simple/pip/), version: 7.0.3
> Found link https://files.pythonhosted.org/packages/35/59/5b23115758ba0f2fc465c459611865173ef006202ba83f662d1f58ed2fb8/pip-7.0.3.tar.gz#sha256=b4c598825a6f6dc2cac65968feb28e6be6c1f7f1408493c60a07eaa731a0affd (from https://pypi.org/simple/pip/),version: 7.0.3
> Found link https://files.pythonhosted.org/packages/f7/c0/9f8dac88326609b4b12b304e8382f64f7d5af7735a00d2fac36cf135fc30/pip-7.1.0-py2.py3-none-any.whl#sha256=80c29f899d3a00a448d65f8158544d22935baec7159af8da1a4fa1490ced481d (from https://pypi.org/simple/pip/), version: 7.1.0
> Found link https://files.pythonhosted.org/packages/7e/71/3c6ece07a9a885650aa6607b0ebfdf6fc9a3ef8691c44b5e724e4eee7bf2/pip-7.1.0.tar.gz#sha256=d5275ba3221182a5dd1b6bcfbfc5ec277fb399dd23226d6fa018048f7e0f10f2 (from https://pypi.org/simple/pip/),version: 7.1.0
> Found link https://files.pythonhosted.org/packages/1c/56/094d563c508917081bccff365e4f621ba33073c1c13aca9267a43cfcaf13/pip-7.1.1-py2.py3-none-any.whl#sha256=ce13000878d34c1178af76cb8cf269e232c00508c78ed46c165dd5b0881615f4 (from https://pypi.org/simple/pip/), version: 7.1.1
> Found link https://files.pythonhosted.org/packages/3b/bb/b3f2a95494fd3f01d3b3ae530e7c0e910dc25e88e30787b0a5e10cbc0640/pip-7.1.1.tar.gz#sha256=b22fe3c93a13fc7c04f145a42fd2ad50a9e3e1b8a7eed2e2b1c66e540a0951da (from https://pypi.org/simple/pip/),version: 7.1.1
> Found link https://files.pythonhosted.org/packages/b2/d0/cd115fe345dd6f07ec1c780020a7dfe74966fceeb171e0f20d1d4905b0b7/pip-7.1.2-py2.py3-none-any.whl#sha256=b9d3983b5cce04f842175e30169d2f869ef12c3546fd274083a65eada4e9708c (from https://pypi.org/simple/pip/), version: 7.1.2
> Found link https://files.pythonhosted.org/packages/d0/92/1e8406c15d9372084a5bf79d96da3a0acc4e7fcf0b80020a4820897d2a5c/pip-7.1.2.tar.gz#sha256=ca047986f0528cfa975a14fb9f7f106271d4e0c3fe1ddced6c1db2e7ae57a477 (from https://pypi.org/simple/pip/),version: 7.1.2
> Found link https://files.pythonhosted.org/packages/00/ae/bddef02881ee09c6a01a0d6541aa6c75a226a4e68b041be93142befa0cd6/pip-8.0.0-py2.py3-none-any.whl#sha256=262ed1823eb7fbe3f18a9bedb4800e59c4ab9a6682aff8c37b5ee83ea840910b (from https://pypi.org/simple/pip/), version: 8.0.0
> Found link https://files.pythonhosted.org/packages/e3/2d/03c014d11e66628abf2fda5ca00f779cbe7b5292c5cd13d42a95b94aa9b8/pip-8.0.0.tar.gz#sha256=90112b296152f270cb8dddcd19b7b87488d9e002e8cf622e14c4da9c2f6319b1 (from https://pypi.org/simple/pip/),version: 8.0.0
> Found link https://files.pythonhosted.org/packages/45/9c/6f9a24917c860873e2ce7bd95b8f79897524353df51d5d920cd6b6c1ec33/pip-8.0.1-py2.py3-none-any.whl#sha256=dedaac846bc74e38a3253671f51a056331ffca1da70e3f48d8128f2aa0635bba (from https://pypi.org/simple/pip/), version: 8.0.1
> Found link https://files.pythonhosted.org/packages/ea/66/a3d6187bd307159fedf8575c0d9ee2294d13b1cdd11673ca812e6a2dda8f/pip-8.0.1.tar.gz#sha256=477c50b3e538a7ac0fa611fb8b877b04b33fb70d325b12a81b9dbf3eb1158a4d (from https://pypi.org/simple/pip/),version: 8.0.1
> Found link https://files.pythonhosted.org/packages/e7/a0/bd35f5f978a5e925953ce02fa0f078a232f0f10fcbe543d8cfc043f74fda/pip-8.0.2-py2.py3-none-any.whl#sha256=249a6f3194be8c2e8cb4d4be3f6fd16a9f1e3336218caffa8e7419e3816f9988 (from https://pypi.org/simple/pip/), version: 8.0.2
> Found link https://files.pythonhosted.org/packages/ce/15/ee1f9a84365423e9ef03d0f9ed0eba2fb00ac1fffdd33e7b52aea914d0f8/pip-8.0.2.tar.gz#sha256=46f4bd0d8dfd51125a554568d646fe4200a3c2c6c36b9f2d06d2212148439521 (from https://pypi.org/simple/pip/),version: 8.0.2
> Found link https://files.pythonhosted.org/packages/ae/d4/2b127310f5364610b74c28e2e6a40bc19e2d3c9a9a4e012d3e333e767c99/pip-8.0.3-py2.py3-none-any.whl#sha256=b0335bc837f9edb5aad03bd43d0973b084a1cbe616f8188dc23ba13234dbd552 (from https://pypi.org/simple/pip/), version: 8.0.3
> Found link https://files.pythonhosted.org/packages/22/f3/14bc87a4f6b5ec70b682765978a6f3105bf05b6781fa97e04d30138bd264/pip-8.0.3.tar.gz#sha256=30f98b66f3fe1069c529a491597d34a1c224a68640c82caf2ade5f88aa1405e8 (from https://pypi.org/simple/pip/),version: 8.0.3
> Found link https://files.pythonhosted.org/packages/1e/c7/78440b3fb882ed001e6e12d8770bd45e73d6eced4e57f7c072b829ce8a3d/pip-8.1.0-py2.py3-none-any.whl#sha256=a542b99e08002ead83200198e19a3983270357e1cb4fe704247990b5b35471dc (from https://pypi.org/simple/pip/), version: 8.1.0
> Found link https://files.pythonhosted.org/packages/3c/72/6981d5adf880adecb066a1a1a4c312a17f8d787a3b85446967964ac66d55/pip-8.1.0.tar.gz#sha256=d8faa75dd7d0737b16d50cd0a56dc91a631c79ecfd8d38b80f6ee929ec82043e (from https://pypi.org/simple/pip/),version: 8.1.0
> Found link https://files.pythonhosted.org/packages/31/6a/0f19a7edef6c8e5065f4346137cc2a08e22e141942d66af2e1e72d851462/pip-8.1.1-py2.py3-none-any.whl#sha256=44b9c342782ab905c042c207d995aa069edc02621ddbdc2b9f25954a0fdac25c (from https://pypi.org/simple/pip/), version: 8.1.1
> Found link https://files.pythonhosted.org/packages/41/27/9a8d24e1b55bd8c85e4d022da2922cb206f183e2d18fee4e320c9547e751/pip-8.1.1.tar.gz#sha256=3e78d3066aaeb633d185a57afdccf700aa2e660436b4af618bcb6ff0fa511798 (from https://pypi.org/simple/pip/),version: 8.1.1
> Found link https://files.pythonhosted.org/packages/9c/32/004ce0852e0a127f07f358b715015763273799bd798956fa930814b60f39/pip-8.1.2-py2.py3-none-any.whl#sha256=6464dd9809fb34fc8df2bf49553bb11dac4c13d2ffa7a4f8038ad86a4ccb92a1 (from https://pypi.org/simple/pip/), version: 8.1.2
> Found link https://files.pythonhosted.org/packages/e7/a8/7556133689add8d1a54c0b14aeff0acb03c64707ce100ecd53934da1aa13/pip-8.1.2.tar.gz#sha256=4d24b03ffa67638a3fa931c09fd9e0273ffa904e95ebebe7d4b1a54c93d7b732 (from https://pypi.org/simple/pip/),version: 8.1.2
> Found link https://files.pythonhosted.org/packages/3f/ef/935d9296acc4f48d1791ee56a73781271dce9712b059b475d3f5fa78487b/pip-9.0.0-py2.py3-none-any.whl#sha256=c856ac18ca01e7127456f831926dc67cc7d3ab663f4c13b1ec156e36db4de574 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.0
> Found link https://files.pythonhosted.org/packages/5e/53/eaef47e5e2f75677c9de0737acc84b659b78a71c4086f424f55346a341b5/pip-9.0.0.tar.gz#sha256=f62fb70e7e000e46fce12aaeca752e5281a5446977fe5a75ab4189a43b3f8793 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.0
> Found link https://files.pythonhosted.org/packages/b6/ac/7015eb97dc749283ffdec1c3a88ddb8ae03b8fad0f0e611408f196358da3/pip-9.0.1-py2.py3-none-any.whl#sha256=690b762c0a8460c303c089d5d0be034fb15a5ea2b75bdf565f40421f542fefb0 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.1
> Found link https://files.pythonhosted.org/packages/11/b6/abcb525026a4be042b486df43905d6893fb04f05aac21c32c638e939e447/pip-9.0.1.tar.gz#sha256=09f243e1a7b461f654c26a725fa373211bb7ff17a9300058b205c61658ca940d (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.1
> Found link https://files.pythonhosted.org/packages/e7/f9/e801dcea22886cd513f6bd2e8f7e581bd6f67bb8e8f1cd8e7b92d8539280/pip-9.0.2-py2.py3-none-any.whl#sha256=b135491ddb061f39719b8472d8abb59c613816a2b86069c332db74d1cd208ab2 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.2
> Found link https://files.pythonhosted.org/packages/e5/8f/3fc66461992dc9e9fcf5e005687d5f676729172dda640df2fd8b597a6da7/pip-9.0.2.tar.gz#sha256=88110a224e9d30e5d76592a0b2130ef10e7e67a6426e8617bb918fffbfe91fe5 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.2
> Found link https://files.pythonhosted.org/packages/ac/95/a05b56bb975efa78d3557efa36acaf9cf5d2fd0ee0062060493687432e03/pip-9.0.3-py2.py3-none-any.whl#sha256=c3ede34530e0e0b2381e7363aded78e0c33291654937e7373032fda04e8803e5 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.3
> Found link https://files.pythonhosted.org/packages/c4/44/e6b8056b6c8f2bfd1445cc9990f478930d8e3459e9dbf5b8e2d2922d64d3/pip-9.0.3.tar.gz#sha256=7bf48f9a693be1d58f49f7af7e0ae9fe29fd671cde8a55e6edca3581c4ef5796 (from https://pypi.org/simple/pip/) (requires-python:>=2.6,!=3.0.*,!=3.1.*,!=3.2.*), version: 9.0.3
> Found link https://files.pythonhosted.org/packages/4b/5a/8544ae02a5bd28464e03af045e8aabde20a7b02db1911a9159328e1eb25a/pip-10.0.0b1-py2.py3-none-any.whl#sha256=dbd5d24cd461be23429625085a36cc8732cbcac4d2aaf673031f80f6ac07d844 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.0b1
> Found link https://files.pythonhosted.org/packages/aa/6d/ffbb86abf18b750fb26f27eda7c7732df2aacaa669c420d2eb2ad6df3458/pip-10.0.0b1.tar.gz#sha256=8d6e63d8b99752e4b53f272b66f9cd7b59e2b288e9a863a61c48d167203a2656 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.0b1
> Found link https://files.pythonhosted.org/packages/97/72/1d514201e7d7fc7fff5aac3de9c7b892cd72fb4bf23fd983630df96f7412/pip-10.0.0b2-py2.py3-none-any.whl#sha256=79f55588912f1b2b4f86f96f11e329bb01b25a484e2204f245128b927b1038a7 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.0b2
> Found link https://files.pythonhosted.org/packages/32/67/572f642e6e42c580d3154964cfbab7d9322c23b0f417c6c01fdd206a2777/pip-10.0.0b2.tar.gz#sha256=ad6adec2150ce4aed8f6134d9b77d928fc848dbcb887fb1a455988cf99da5cae (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.0b2
> Found link https://files.pythonhosted.org/packages/62/a1/0d452b6901b0157a0134fd27ba89bf95a857fbda64ba52e1ca2cf61d8412/pip-10.0.0-py2.py3-none-any.whl#sha256=86a60a96d85e329962a9e6f6af612cbc11106293dbc83f119802b5bee9874cf3 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.0
> Found link https://files.pythonhosted.org/packages/e0/69/983a8e47d3dfb51e1463c1e962b2ccd1d74ec4e236e232625e353d830ed2/pip-10.0.0.tar.gz#sha256=f05a3eeea64bce94e85cc6671d679473d66288a4d37c3fcf983584954096b34f (from https://pypi.org/simple/pip/)(requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.0
> Found link https://files.pythonhosted.org/packages/0f/74/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4/pip-10.0.1-py2.py3-none-any.whl#sha256=717cdffb2833be8409433a93746744b59505f42146e8d37de6c62b430e25d6d7 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.1
> Found link https://files.pythonhosted.org/packages/ae/e8/2340d46ecadb1692a1e455f13f75e596d4eab3d11a57446f08259dee8f02/pip-10.0.1.tar.gz#sha256=f2bd08e0cd1b06e10218feaf6fef299f473ba706582eb3bd9d52203fdbd7ee68 (from https://pypi.org/simple/pip/)(requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*), version: 10.0.1
> Found link https://files.pythonhosted.org/packages/5f/25/e52d3f31441505a5f3af41213346e5b6c221c9e086a166f3703d2ddaf940/pip-18.0-py2.py3-none-any.whl#sha256=070e4bf493c7c2c9f6a08dd797dd3c066d64074c38e9e8a0fb4e6541f266d96c (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 18.0
> Found link https://files.pythonhosted.org/packages/69/81/52b68d0a4de760a2f1979b0931ba7889202f302072cc7a0d614211bc7579/pip-18.0.tar.gz#sha256=a0e11645ee37c90b40c46d607070c4fd583e2cd46231b1c06e389c5e814eed76 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 18.0
> Found link https://files.pythonhosted.org/packages/c2/d7/90f34cb0d83a6c5631cf71dfe64cc1054598c843a92b400e55675cc2ac37/pip-18.1-py2.py3-none-any.whl#sha256=7909d0a0932e88ea53a7014dfd14522ffef91a464daaaf5c573343852ef98550 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 18.1
> Found link https://files.pythonhosted.org/packages/45/ae/8a0ad77defb7cc903f09e551d88b443304a9bd6e6f124e75c0fbbf6de8f7/pip-18.1.tar.gz#sha256=c0a292bd977ef590379a3f05d7b7f65135487b67470f6281289a94e015650ea1 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 18.1
> Found link https://files.pythonhosted.org/packages/60/64/73b729587b6b0d13e690a7c3acd2231ee561e8dd28a58ae1b0409a5a2b20/pip-19.0-py2.py3-none-any.whl#sha256=249ab0de4c1cef3dba4cf3f8cca722a07fc447b1692acd9f84e19c646db04c9a (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0
> Found link https://files.pythonhosted.org/packages/11/31/c483614095176ddfa06ac99c2af4171375053b270842c7865ca0b4438dc1/pip-19.0.tar.gz#sha256=c82bf8bc00c5732f0dd49ac1dea79b6242a1bd42a5012e308ed4f04369b17e54 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0
> Found link https://files.pythonhosted.org/packages/46/dc/7fd5df840efb3e56c8b4f768793a237ec4ee59891959d6a215d63f727023/pip-19.0.1-py2.py3-none-any.whl#sha256=aae79c7afe895fb986ec751564f24d97df1331bb99cdfec6f70dada2f40c0044 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0.1
> Found link https://files.pythonhosted.org/packages/c8/89/ad7f27938e59db1f0f55ce214087460f65048626e2226531ba6cb6da15f0/pip-19.0.1.tar.gz#sha256=e81ddd35e361b630e94abeda4a1eddd36d47a90e71eb00f38f46b57f787cd1a5 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0.1
> Found link https://files.pythonhosted.org/packages/d7/41/34dd96bd33958e52cb4da2f1bf0818e396514fd4f4725a79199564cd0c20/pip-19.0.2-py2.py3-none-any.whl#sha256=6a59f1083a63851aeef60c7d68b119b46af11d9d803ddc1cf927b58edcd0b312 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0.2
> Found link https://files.pythonhosted.org/packages/4c/4d/88bc9413da11702cbbace3ccc51350ae099bb351febae8acc85fec34f9af/pip-19.0.2.tar.gz#sha256=f851133f8b58283fa50d8c78675eb88d4ff4cde29b6c41205cd938b06338e0e5 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0.2
> Found link https://files.pythonhosted.org/packages/d8/f3/413bab4ff08e1fc4828dfc59996d721917df8e8583ea85385d51125dceff/pip-19.0.3-py2.py3-none-any.whl#sha256=bd812612bbd8ba84159d9ddc0266b7fbce712fc9bc98c82dee5750546ec8ec64 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0.3
> Found link https://files.pythonhosted.org/packages/36/fa/51ca4d57392e2f69397cd6e5af23da2a8d37884a605f9e3f2d3bfdc48397/pip-19.0.3.tar.gz#sha256=6e6f197a1abfb45118dbb878b5c859a0edbdd33fd250100bc015b67fded4b9f2 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.0.3
> Found link https://files.pythonhosted.org/packages/f9/fb/863012b13912709c13cf5cfdbfb304fa6c727659d6290438e1a88df9d848/pip-19.1-py2.py3-none-any.whl#sha256=8f59b6cf84584d7962d79fd1be7a8ec0eb198aa52ea864896551736b3614eee9 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.1
> Found link https://files.pythonhosted.org/packages/51/5f/802a04274843f634469ef299fcd273de4438386deb7b8681dd059f0ee3b7/pip-19.1.tar.gz#sha256=d9137cb543d8a4d73140a3282f6d777b2e786bb6abb8add3ac5b6539c82cd624 (from https://pypi.org/simple/pip/) (requires-python:>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*), version: 19.1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/588/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/587 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/587/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/587/comments | https://api.github.com/repos/huggingface/transformers/issues/587/events | https://github.com/huggingface/transformers/issues/587 | 440,595,821 | MDU6SXNzdWU0NDA1OTU4MjE= | 587 | From which layer is fine tuning starting in BERT? | {
"login": "kbulutozler",
"id": 34663649,
"node_id": "MDQ6VXNlcjM0NjYzNjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/34663649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kbulutozler",
"html_url": "https://github.com/kbulutozler",
"followers_url": "https://api.github.com/users/kbulutozler/followers",
"following_url": "https://api.github.com/users/kbulutozler/following{/other_user}",
"gists_url": "https://api.github.com/users/kbulutozler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kbulutozler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kbulutozler/subscriptions",
"organizations_url": "https://api.github.com/users/kbulutozler/orgs",
"repos_url": "https://api.github.com/users/kbulutozler/repos",
"events_url": "https://api.github.com/users/kbulutozler/events{/privacy}",
"received_events_url": "https://api.github.com/users/kbulutozler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"When BERT is fine-tuned, all layers are trained - this is quite different from fine-tuning in a lot of other ML models, but it matches what was described in the paper and works quite well (as long as you only fine-tune for a few epochs - it's very easy to overfit if you fine-tune the whole model for a long time on a small amount of data!)",
"Thank you. May I ask what the difference of this approach is from pre-training? ",
"The original model weights are used for initialization, whereas for a model trained from scratch, the weights are initialized randomly.",
"Thank you, I got it now.",
"Came upon this when searching for an answer to a related question. \r\n\r\nWhen adding a dense layer on top for a classification task, do the model weights for BERT get updated or only the dense layer(are the BERT layers frozen or unfrozen during training)? I ask b/c when training a classifier on the stack overflow tags dataset which contains 40.000 posts with tags in 20 classes I got some unusual results. I trained base-uncased and base-cased and what is weird is that after the first epoch, the test set prediction remain the same. By that I mean exactly the same. In other words, with a 80/20 split (32.000 posts in train set / 8.000 posts in test set) it doesn't matter if you are doing 1, 2 or 3 epochs, the test set prediction don't change. It stays at 83.875% for uncased and 83.224% for cased. The weird thing is that the training loss goes down. \r\n\r\nI have put the actual predictions in a pandas dataframe and the predictions in epoch 1, 2 and 3 are exactly the same.\r\n\r\n",
"When a classifier is trained, all the model weights get updated, not just the weights in the classifier layer, so I would expect some overfitting if you train on a small labelled dataset for a lot of epochs. \r\nThe behaviour you've described is unusual - have you tried varying the learning rate, or making a much smaller training set, training on it for 1 epoch only and seeing what the results look like? It might indicate a problem with the data.",
"That's what I thought. I tested training the uncased version with 20% of the dataset (training set 6400 and testing set 1600) which gave me an eval accuracy of 0.76875 after epoch 1 and 2. The eval loss is even the excact same value ( 0.7407131800800562 )\r\n\r\nI ran eval before starting the training which gave an accuracy of 0.05 which makes sense with 20 classes and random weights. Then after epoch 1 it jumps up to aforementioned values and stays the same in epoch 2 and 3.\r\n\r\nAny pointers on how to debug this? Might it help checking the gradients?",
"Yeah, that's where I'd look. If forced to guess, I'd say the network isn't really getting any input, and is therefore just learning the bias in the last layer. So you could try inspecting the data batches in the training loop right before they enter the network, to make sure there's actually data there and that the data matches the labels, and also checking that most network parameters are getting some gradient after each batch. If your code is in a repo somewhere, feel free to link it and I'll take a look.",
"I went through the training data and it appears that its formatted the right way. I also checked the gradients and they are adjusted after each back() call. I think this might be related to the warm_up part of the adjustable learning rate.\r\n\r\nIt happens after epoch 3:\r\n\r\n<img width=\"413\" alt=\"Screen Shot 2019-05-17 at 20 13 33\" src=\"https://user-images.githubusercontent.com/3185711/57953675-49f04000-78e0-11e9-8976-a5367bc5b0f3.png\">\r\n\r\nThen I also get a warning: \r\n\r\n05/17/2019 20:15:02 - WARNING - pytorch_pretrained_bert.optimization - Training beyond specified 't_total'. Learning rate multiplier set to 0.0. Please set 't_total' of WarmupLinearSchedule correctly.\r\n\r\nI am using a default value of 0.1. I plotted the learning rate over 4 epochs and in epoch 4 the learning rate becomes negative: \r\n\r\n<img width=\"436\" alt=\"Screen Shot 2019-05-17 at 21 03 27\" src=\"https://user-images.githubusercontent.com/3185711/57956376-482a7a80-78e8-11e9-987b-657198f19ef5.png\">\r\n\r\n\r\n3 epochs is more then enough for this dataset as it starts to overfit quickly. I just want to understand why this happens, it doesn't make sense to me. The loss and accuracy in the evaluation phase is exactly the same(and the training loss drops in epoch no 4 when the LR is negative). I put the code on Kaggle if you want to take a look ( no pressure :-) )\r\n\r\nhttps://www.kaggle.com/stoddur/bert-so-classification-test/edit\r\n\r\nIm going to play a bit with the warm_up function and see which learning rates are set with different values. Will let you know if I find out anything else.",
"In the BERT paper, and in this repo, the learning rate is 'warmed up' from 0 to the maximum over the first 10% of training, and then linearly decays back to 0 for the remaining 90% of training. In order for that to work, the learning rate scheduler needs to know how many steps there will be in training in total (i.e. steps_per_epoch * num_epochs). It seems like that value is being passed incorrectly, causing the LR to decay to zero too quickly and therefore freezing all the weights.\r\n\r\nAlso, I can't see the code at your link - is it possibly private?",
"Yeah, I noticed that now reading through the paper :)\r\n\r\nMade the kernel public, the code is a bit rough, hope it makes sense to you. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I get error below while running the program.. Did I do any mistake?\r\n\r\n warmup_linear = WarmupLinearSchedule( warmup=args.warmup_proportion,\r\n t_total=num_train_optimization_steps)\r\n\r\nlr_this_step = args.learning_rate * warmup_linear.get_lr(num_train_optimization_steps, \r\n args.warmup_proportion)\r\n\r\n \r\n\r\n**WARNING - pytorch_pretrained_bert.optimization - Training beyond specified 't_total'. Learning rate multiplier set to 0.0. Please set 't_total' of WarmupLinearSchedule correctly.**\r\n\r\n\r\n",
"@kbulutozler \r\n\r\n\r\n@steindor .. did you solve WarmupLinearSchedule issue ? I am getting same error .. I tried your kaggle code but getting error that \" the link does not exists\"\r\n\r\nI get error below while running the program.. Did I do any mistake?\r\n\r\nwarmup_linear = WarmupLinearSchedule( warmup=args.warmup_proportion,\r\nt_total=num_train_optimization_steps)\r\n\r\nlr_this_step = args.learning_rate * warmup_linear.get_lr(num_train_optimization_steps,\r\nargs.warmup_proportion)\r\n\r\nWARNING - pytorch_pretrained_bert.optimization - Training beyond specified 't_total'. Learning rate multiplier set to 0.0. Please set 't_total' of WarmupLinearSchedule correctly."
] | 1,557 | 1,568 | 1,563 | NONE | null | Hi, I looked at the code but couldn't work out the layer from which BERT is being fine-tuned. I am using the simple_lm_finetuning.py script. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/587/timeline | completed | null | null |
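As a follow-up to the `t_total` discussion in the comments above, here is a minimal sketch of how the warmup-linear schedule is meant to be driven. It assumes the pytorch_pretrained_bert 0.6.x `WarmupLinearSchedule` API, and all step counts and hyperparameters below are made-up illustration values; the key point, per the explanation in the thread, is that `t_total` is steps_per_epoch * num_epochs and `get_lr` receives the current step, not the total.

```python
from pytorch_pretrained_bert.optimization import WarmupLinearSchedule

steps_per_epoch, num_epochs = 500, 3           # hypothetical values
learning_rate, warmup_proportion = 5e-5, 0.1
t_total = steps_per_epoch * num_epochs         # the schedule must see ALL steps

schedule = WarmupLinearSchedule(warmup=warmup_proportion, t_total=t_total)

for global_step in range(t_total):
    # Pass the CURRENT step. Passing t_total (as in the snippet quoted in the
    # comments) evaluates the schedule at its end point, so the multiplier is
    # 0.0 and the weights effectively freeze.
    lr_this_step = learning_rate * schedule.get_lr(global_step)
```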
https://api.github.com/repos/huggingface/transformers/issues/586 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/586/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/586/comments | https://api.github.com/repos/huggingface/transformers/issues/586/events | https://github.com/huggingface/transformers/issues/586 | 440,562,056 | MDU6SXNzdWU0NDA1NjIwNTY= | 586 | Padding Token in Transformer XL | {
"login": "sb1992",
"id": 10261100,
"node_id": "MDQ6VXNlcjEwMjYxMTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/10261100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sb1992",
"html_url": "https://github.com/sb1992",
"followers_url": "https://api.github.com/users/sb1992/followers",
"following_url": "https://api.github.com/users/sb1992/following{/other_user}",
"gists_url": "https://api.github.com/users/sb1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sb1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sb1992/subscriptions",
"organizations_url": "https://api.github.com/users/sb1992/orgs",
"repos_url": "https://api.github.com/users/sb1992/repos",
"events_url": "https://api.github.com/users/sb1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/sb1992/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"For these causal models that consider the left-context only, it's ok not to worry too much about padding since the attention modules only look to the previous tokens. Just be careful when you compute the loss to ignore the out-of-sentence-tokens (using loss functions `ignore_index` for instance).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, I have a related doubt. In the example code [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py), GPT-2 and transformer-xl throw error due to lacking padding token.\r\n\r\nPlease check recent comments in this [Issue](https://github.com/huggingface/transformers/issues/3021)"
] | 1,557 | 1,588 | 1,562 | NONE | null | I have sentences of varying lengths and I was wondering how to handle that, as I could not see any padding token present. The index 0 refers to <eos> in the vocab, so any help on adding padding would be appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/586/timeline | completed | null | null |
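To make the `ignore_index` suggestion above concrete, here is a minimal, self-contained sketch (not an official recipe; the pad label value and tensor shapes are arbitrary). Because Transformer-XL attends to the left only, right-padding does not corrupt the representations of the real tokens, so masking the padded positions out of the loss is enough:

```python
import torch
import torch.nn.functional as F

PAD_LABEL = -1                           # label value the loss will skip
logits = torch.randn(2, 5, 100)          # (batch, seq_len, vocab): dummy model output
labels = torch.randint(0, 100, (2, 5))   # dummy targets
labels[0, 3:] = PAD_LABEL                # positions past the end of sentence 1
labels[1, 4:] = PAD_LABEL                # positions past the end of sentence 2

loss = F.cross_entropy(logits.view(-1, 100), labels.view(-1),
                       ignore_index=PAD_LABEL)
```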
https://api.github.com/repos/huggingface/transformers/issues/585 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/585/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/585/comments | https://api.github.com/repos/huggingface/transformers/issues/585/events | https://github.com/huggingface/transformers/pull/585 | 440,463,287 | MDExOlB1bGxSZXF1ZXN0Mjc2MDA4ODEy | 585 | Make the epsilon of LayerNorm configurable. | {
"login": "huntzhan",
"id": 5213906,
"node_id": "MDQ6VXNlcjUyMTM5MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5213906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huntzhan",
"html_url": "https://github.com/huntzhan",
"followers_url": "https://api.github.com/users/huntzhan/followers",
"following_url": "https://api.github.com/users/huntzhan/following{/other_user}",
"gists_url": "https://api.github.com/users/huntzhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huntzhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huntzhan/subscriptions",
"organizations_url": "https://api.github.com/users/huntzhan/orgs",
"repos_url": "https://api.github.com/users/huntzhan/repos",
"events_url": "https://api.github.com/users/huntzhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/huntzhan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok, good to go, thanks @huntzhan!"
] | 1,557 | 1,557 | 1,557 | CONTRIBUTOR | null | It would be great if we could configure `eps` in layer normalization, since models like ERNIE use `eps=1e-5` instead of `1e-12`.
Related: #514 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/585/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/585",
"html_url": "https://github.com/huggingface/transformers/pull/585",
"diff_url": "https://github.com/huggingface/transformers/pull/585.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/585.patch",
"merged_at": 1557327399000
} |
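With this merged, the epsilon becomes a constructor argument. A quick sketch, assuming pytorch_pretrained_bert is installed from source with this change:

```python
import torch
from pytorch_pretrained_bert.modeling import BertLayerNorm

# The default eps stays at 1e-12; ERNIE-style checkpoints can now request 1e-5.
layer_norm = BertLayerNorm(768, eps=1e-5)
print(layer_norm(torch.randn(1, 768)).shape)  # torch.Size([1, 768])
```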
https://api.github.com/repos/huggingface/transformers/issues/584 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/584/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/584/comments | https://api.github.com/repos/huggingface/transformers/issues/584/events | https://github.com/huggingface/transformers/issues/584 | 440,378,162 | MDU6SXNzdWU0NDAzNzgxNjI= | 584 | The number of train examples in STS-B is only 5749 | {
"login": "Dawn90",
"id": 30382717,
"node_id": "MDQ6VXNlcjMwMzgyNzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/30382717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dawn90",
"html_url": "https://github.com/Dawn90",
"followers_url": "https://api.github.com/users/Dawn90/followers",
"following_url": "https://api.github.com/users/Dawn90/following{/other_user}",
"gists_url": "https://api.github.com/users/Dawn90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dawn90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dawn90/subscriptions",
"organizations_url": "https://api.github.com/users/Dawn90/orgs",
"repos_url": "https://api.github.com/users/Dawn90/repos",
"events_url": "https://api.github.com/users/Dawn90/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dawn90/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,557 | 1,562 | 1,562 | NONE | null | Hi,
Thanks a lot for the amazing work!
Here's my issue:
When I run './examples/run_classifier.py' with the STS-B task, I found the number of train examples is only 5749, less than the 7k reported in the paper ([paper link](https://www.nyu.edu/projects/bowman/glue.pdf)).
Thanks again!
Best,
Dong | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/584/timeline | completed | null | null |
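For anyone who wants to double-check locally, a small counting sketch (the paths are assumptions based on the standard GLUE download-script layout): the official STS-B train split contains 5749 rows and the dev split 1500, so the processor is reading the file correctly; the ~7k in the paper roughly matches train + dev combined (5749 + 1500), which may explain the discrepancy.

```python
# Paths below assume the usual GLUE data layout produced by the download script.
def count_examples(path):
    with open(path, encoding="utf-8") as f:
        return sum(1 for _ in f) - 1  # minus the header row

print(count_examples("glue_data/STS-B/train.tsv"))  # expected: 5749
print(count_examples("glue_data/STS-B/dev.tsv"))    # expected: 1500
```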
https://api.github.com/repos/huggingface/transformers/issues/583 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/583/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/583/comments | https://api.github.com/repos/huggingface/transformers/issues/583/events | https://github.com/huggingface/transformers/issues/583 | 440,288,169 | MDU6SXNzdWU0NDAyODgxNjk= | 583 | BERT + PyTorch + XLA | {
"login": "snakers4",
"id": 12515440,
"node_id": "MDQ6VXNlcjEyNTE1NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/12515440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snakers4",
"html_url": "https://github.com/snakers4",
"followers_url": "https://api.github.com/users/snakers4/followers",
"following_url": "https://api.github.com/users/snakers4/following{/other_user}",
"gists_url": "https://api.github.com/users/snakers4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snakers4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snakers4/subscriptions",
"organizations_url": "https://api.github.com/users/snakers4/orgs",
"repos_url": "https://api.github.com/users/snakers4/repos",
"events_url": "https://api.github.com/users/snakers4/events{/privacy}",
"received_events_url": "https://api.github.com/users/snakers4/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Do u mean this one? [link](https://news.developer.nvidia.com/nvidia-achieves-4x-speedup-on-bert-neural-network/)",
"No, I mean this repo\nhttps://github.com/pytorch/xla/tree/master\n\nLooks like Facebook and Google want to make pytorch on TPU\n\n\nOn May 6, 2019 9:14:25 AM GMT+03:00, chunbo dai <[email protected]> wrote:\n>Do u mean this one?\n>[link](https://news.developer.nvidia.com/nvidia-achieves-4x-speedup-on-bert-neural-network/)\n>\n>-- \n>You are receiving this because you authored the thread.\n>Reply to this email directly or view it on GitHub:\n>https://github.com/huggingface/pytorch-pretrained-BERT/issues/583#issuecomment-489510379\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | Hi,
Many thanks for your amazing library!
Even though no models were shared for Russian, we used your interfaces with success when doing some [research](https://towardsdatascience.com/complexity-generalization-computational-cost-in-nlp-modeling-of-morphologically-rich-languages-7fa2c0b45909).
Anyway, here is my question.
Did you try [this](https://github.com/pytorch/xla/tree/master)?
At least on the surface, it looks like it offers PyTorch + TPU support.
It would also be cool to know if anyone has experience running anything with XLA.
Many thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/583/timeline | completed | null | null |
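For reference, the minimal changes to run a PyTorch module on a TPU core through pytorch/xla look roughly like the sketch below. The `torch_xla.core.xla_model` module path is taken from the project's later releases (the very early snapshots organised the package differently), and the tiny linear model is a stand-in for a real BERT module:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                    # one TPU core
model = torch.nn.Linear(10, 2).to(device)   # stand-in for a BERT model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(8, 10, device=device)
loss = model(x).sum()
loss.backward()
xm.optimizer_step(optimizer, barrier=True)  # steps the optimizer and flushes the XLA graph
```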
https://api.github.com/repos/huggingface/transformers/issues/582 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/582/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/582/comments | https://api.github.com/repos/huggingface/transformers/issues/582/events | https://github.com/huggingface/transformers/issues/582 | 440,262,027 | MDU6SXNzdWU0NDAyNjIwMjc= | 582 | Add GPT-2 Bigger Model | {
"login": "Eric-Wallace",
"id": 11711825,
"node_id": "MDQ6VXNlcjExNzExODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/11711825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eric-Wallace",
"html_url": "https://github.com/Eric-Wallace",
"followers_url": "https://api.github.com/users/Eric-Wallace/followers",
"following_url": "https://api.github.com/users/Eric-Wallace/following{/other_user}",
"gists_url": "https://api.github.com/users/Eric-Wallace/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Eric-Wallace/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Eric-Wallace/subscriptions",
"organizations_url": "https://api.github.com/users/Eric-Wallace/orgs",
"repos_url": "https://api.github.com/users/Eric-Wallace/repos",
"events_url": "https://api.github.com/users/Eric-Wallace/events{/privacy}",
"received_events_url": "https://api.github.com/users/Eric-Wallace/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"For convenience to others, here's the config file for 345M:\r\n\r\n```\r\n{\r\n \"initializer_range\": 0.02,\r\n \"layer_norm_epsilon\": 1e-05,\r\n \"n_ctx\": 1024,\r\n \"n_embd\": 1024,\r\n \"n_head\": 16,\r\n \"n_layer\": 24,\r\n \"n_positions\": 1024,\r\n \"vocab_size\": 50257\r\n}\r\n```",
"Here are the concrete steps if you'd like to run the 345M.\r\n\r\nGrab OpenAI's download script from here https://github.com/openai/gpt-2/blob/master/download_model.py. and then run `python download_model.py 345M` to get the model checkpoint.\r\n\r\nThen use the conversion script here https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_gpt2_checkpoint_to_pytorch.py using `python convert_gpt2_checkpoint_to_pytorch.py --gpt2_checkpoint_path gpt2_checkpoint_folder --gpt2_config_file config_file --pytorch_dump_folder_path output_dir`\r\n\r\nwhere config_file is the json posted by @daemon above. \r\n\r\nThen inside https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling_gpt2.py modify the PRETRAINED_MODEL_ARCHIVE_MAP and PRETRAINED_CONFIG_ARCHIVE_MAP to point to the converted pytorch file\r\n\r\n",
"Thanks!\r\n\r\n> Then inside https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling_gpt2.py modify the PRETRAINED_MODEL_ARCHIVE_MAP and PRETRAINED_CONFIG_ARCHIVE_MAP to point to the converted pytorch file\r\n\r\nOr GPT2LMHeadModel.from_pretrained(pytorch_dump_folder_path) without changing modeling_gpt2.py?",
"Why not add this in the module?\r\n\r\nThanks for the instruction, I will likely try if its not integrated soon.",
"When running \"convert_gpt2_checkpoint_to_pytorch.py --gpt2_checkpoint_path gpt2_checkpoint_folder --gpt2_config_file config_file --pytorch_dump_folder_path output_dir\" I get the following error:\r\n\r\n_runfile('C:/Users/nietop1/Desktop/anaconda/trying to generate text/convert_checkpoint_gtp2.py', wdir='C:/Users/nietop1/Desktop/anaconda/trying to generate text')\r\nConverting TensorFlow checkpoint from C:\\Users\\nietop1\\Desktop\\anaconda\\models\\345M\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-32-bd0ca7f018f3>\", line 1, in <module>\r\n runfile('C:/Users/nietop1/Desktop/anaconda/trying to generate text/convert_checkpoint_gtp2.py', wdir='C:/Users/nietop1/Desktop/anaconda/trying to generate text')\r\n\r\n File \"C:\\Anaconda3\\envs\\tensorflow\\lib\\site-packages\\spyder\\utils\\site\\sitecustomize.py\", line 705, in runfile\r\n execfile(filename, namespace)\r\n\r\n File \"C:\\Anaconda3\\envs\\tensorflow\\lib\\site-packages\\spyder\\utils\\site\\sitecustomize.py\", line 102, in execfile\r\n exec(compile(f.read(), filename, 'exec'), namespace)\r\n\r\n File \"C:/Users/nietop1/Desktop/anaconda/trying to generate text/convert_checkpoint_gtp2.py\", line 81, in <module>\r\n 'C:/Users/nietop1/Desktop/anaconda/models/345M')\r\n\r\n File \"C:/Users/nietop1/Desktop/anaconda/trying to generate text/convert_checkpoint_gtp2.py\", line 47, in convert_gpt2_checkpoint_to_pytorch\r\n load_tf_weights_in_gpt2(model, gpt2_checkpoint_path)\r\n\r\n File \"C:\\Anaconda3\\envs\\tensorflow\\lib\\site-packages\\pytorch_pretrained_bert\\modeling_gpt2.py\", line 60, in load_tf_weights_in_gpt2\r\n init_vars = tf.train.list_variables(tf_path)\r\n\r\nAttributeError: module 'tensorflow.python.training.training' has no attribute 'list_variables'_\r\n\r\nHow can this be solved?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Closing this because this is merged."
] | 1,556 | 1,563 | 1,563 | NONE | null | OpenAI just released the next-biggest version of their language model. I think that to add the new model, one needs to use the conversion script from TF to PyTorch and then save the model as another option in PRETRAINED_MODEL_ARCHIVE_MAP. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/582/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/582/timeline | completed | null | null |
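Once the checkpoint is converted as described in the comments, editing the archive maps is optional: `from_pretrained` also accepts a local folder. A hedged sketch (the folder name is an assumption; it must contain the `pytorch_model.bin` and `config.json` written by the conversion script):

```python
from pytorch_pretrained_bert import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")       # the BPE vocab is shared across sizes
model = GPT2LMHeadModel.from_pretrained("./gpt2-345M")  # hypothetical dump folder
model.eval()
```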
https://api.github.com/repos/huggingface/transformers/issues/581 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/581/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/581/comments | https://api.github.com/repos/huggingface/transformers/issues/581/events | https://github.com/huggingface/transformers/issues/581 | 440,218,813 | MDU6SXNzdWU0NDAyMTg4MTM= | 581 | BertAdam gradient clipping is not global | {
"login": "raulpuric",
"id": 9101033,
"node_id": "MDQ6VXNlcjkxMDEwMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9101033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raulpuric",
"html_url": "https://github.com/raulpuric",
"followers_url": "https://api.github.com/users/raulpuric/followers",
"following_url": "https://api.github.com/users/raulpuric/following{/other_user}",
"gists_url": "https://api.github.com/users/raulpuric/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raulpuric/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raulpuric/subscriptions",
"organizations_url": "https://api.github.com/users/raulpuric/orgs",
"repos_url": "https://api.github.com/users/raulpuric/repos",
"events_url": "https://api.github.com/users/raulpuric/events{/privacy}",
"received_events_url": "https://api.github.com/users/raulpuric/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | Just took a look at the gradient clipping algorithm used in: https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ae8c8be1e3fc770968cd3fdb3b643e0b166e540/pytorch_pretrained_bert/optimization.py#L270
It's clipping each parameter's gradient to a norm of 1 individually (a local, per-parameter clip). It should be clipping gradients to a global norm of 1 across all parameters, as in https://github.com/google-research/bert/blob/master/optimization.py#L74 or in https://github.com/NVIDIA/Megatron-LM/blob/master/pretrain_bert.py#L226.
"url": "https://api.github.com/repos/huggingface/transformers/issues/581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/581/timeline | completed | null | null |
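A sketch of one workaround while the optimizer clips per parameter: disable BertAdam's internal clipping (the clip only runs when `max_grad_norm` is positive, so `-1` skips it) and apply a single global-norm clip before each step, mirroring TensorFlow's `clip_by_global_norm`. The tiny linear model is a stand-in for BERT:

```python
import torch
from pytorch_pretrained_bert.optimization import BertAdam

model = torch.nn.Linear(10, 2)                                       # stand-in for BERT
optimizer = BertAdam(model.parameters(), lr=5e-5, max_grad_norm=-1)  # turn off the local clip

loss = model(torch.randn(4, 10)).sum()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # one GLOBAL norm over all parameters
optimizer.step()
optimizer.zero_grad()
```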
https://api.github.com/repos/huggingface/transformers/issues/580 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/580/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/580/comments | https://api.github.com/repos/huggingface/transformers/issues/580/events | https://github.com/huggingface/transformers/issues/580 | 440,142,794 | MDU6SXNzdWU0NDAxNDI3OTQ= | 580 | Bert for passage reranking | {
"login": "oisin-dolphin",
"id": 41286500,
"node_id": "MDQ6VXNlcjQxMjg2NTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/41286500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oisin-dolphin",
"html_url": "https://github.com/oisin-dolphin",
"followers_url": "https://api.github.com/users/oisin-dolphin/followers",
"following_url": "https://api.github.com/users/oisin-dolphin/following{/other_user}",
"gists_url": "https://api.github.com/users/oisin-dolphin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oisin-dolphin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oisin-dolphin/subscriptions",
"organizations_url": "https://api.github.com/users/oisin-dolphin/orgs",
"repos_url": "https://api.github.com/users/oisin-dolphin/repos",
"events_url": "https://api.github.com/users/oisin-dolphin/events{/privacy}",
"received_events_url": "https://api.github.com/users/oisin-dolphin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `convert_tf_checkpoint_to_pytorch` script is made to convert the Google pre-trained weights in `BertForPretraining` model, you have to modify it to convert another type model.\r\n\r\nIn your case, you want to load the passage re-ranking model in a `BertForSequenceClassification` model which has the same structure (BERT + a classifier on top of the pooled output) as the NYU model.\r\n\r\nhere is a quick way to do that:\r\n- install pytorch-pretrained-bert from source so you can modify it\r\n- change https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py#L34 to initialize a `BertForSequenceClassification` model instead of the `BertForPreTraining` model in the conversion script.\r\n- the structure is not exactly identical so you need to ADD a line that say `pointer = getattr(pointer, 'cls')` in the TWO if-conditions related to `output_weights` and `output_bias` (between L89 and L90 and between L91 and L92 in modeling.py here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L90 and https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L92).\r\n- this should let you convert the tensorflow model in a pytorch one using the scripts.",
"Thanks so much! Your comment saved me a lot of time. However there was a small issue I got around by just changing the tf variable names.\r\n\r\nFor anyone else out there the solution was\r\n* https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py#L34 CHANGE `model = BertForSequenceClassification(config, 2)`\r\n\r\n* https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L70 ADD\r\n``` \r\n if name in ['output_weights' , 'output_bias']:\r\n name = 'classifier/' + name\r\n ```\r\n\r\n",
"Hello @oisin-dolphin and @thomwolf \r\nI followed above suggestions but getting following error.\r\ntensorflow.python.framework.errors_impl.NotFoundError: Key classifier/output_bias not found in checkpoint\r\n\r\nAlso what is significance of following line of code\r\npointer = getattr(pointer, 'cls') \r\n\r\nPlease suggest.\r\n\r\nThanks\r\nMahesh",
"> The `convert_tf_checkpoint_to_pytorch` script is made to convert the Google pre-trained weights in `BertForPretraining` model, you have to modify it to convert another type model.\r\n> \r\n> In your case, you want to load the passage re-ranking model in a `BertForSequenceClassification` model which has the same structure (BERT + a classifier on top of the pooled output) as the NYU model.\r\n> \r\n> here is a quick way to do that:\r\n> \r\n> * install pytorch-pretrained-bert from source so you can modify it\r\n> * change https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py#L34 to initialize a `BertForSequenceClassification` model instead of the `BertForPreTraining` model in the conversion script.\r\n> * the structure is not exactly identical so you need to ADD a line that say `pointer = getattr(pointer, 'cls')` in the TWO if-conditions related to `output_weights` and `output_bias` (between L89 and L90 and between L91 and L92 in modeling.py here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L90 and https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L92).\r\n> * this should let you convert the tensorflow model in a pytorch one using the scripts.\r\n\r\nI followed these instructions for the SequenceClassification model but I still end up getting the same error for 'BertForSequenceClassification' object has no attribute 'bias'.",
"Update for latest transformers, add modeling_bert.py:78: \r\n```python\r\n for name, array in zip(names, arrays):\r\n if name in ['output_weights', 'output_bias']:\r\n name = 'classifier/' + name\r\n```\r\nand convert_bert_original_tf_checkpoint_to_pytorch.py\r\n```python\r\nconfig.num_labels = 2\r\n print(\"Building PyTorch model from configuration: {}\".format(str(config)))\r\n model = BertForSequenceClassification(config)\r\n\r\n```",
"you are my lifesaver @pertschuk Thank you for the instructions",
"glad they helped @Soonhwan-Kwon. \r\n\r\nI used a similar reranking model as part of a project I just released which hooks in to Elasticsearch and reranks search results out of the box, [check it out]( https://medium.com/koursaros-ai/boost-search-api-performance-e-g-410868e82b22) if this sounds like it would be useful! repo: https://github.com/koursaros-ai/nboost ",
"You can create a subclass of `BertForSequenceClassification` and add `self.weight` and `self.bias` to the` __init__` method. Then instantiate your new class and it is ready to use it:\r\n\r\n```\r\nclass BertForPassageRanking(BertForSequenceClassification):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n self.weight = torch.autograd.Variable(torch.ones(2, config.hidden_size),\r\n requires_grad=True)\r\n self.bias = torch.autograd.Variable(torch.ones(2), requires_grad=True)\r\n\r\n\r\nbert_ranking = BertForPassageRanking.from_pretrained(BERT_PASSAGE_RANKING_PATH,\r\n from_tf=True)\r\n```\r\n\r\n`BERT_PASSAGE_RANKING_PATH` is the path where your tf checkpoints files and config json file are stored. You will need to rename the files as follows:\r\n\r\n```\r\nconfig.json\r\nmodel.ckpt.index\r\nmodel.ckpt.meta\r\n```\r\n\r\nAnother option if you do not want to change the file names is to load the json config file with `BertConfig.from_json_file()` and then pass to `BertForPassageRanking.from_pretained()` the path + ckpt file name and the configuration that you have already loaded with `BertConfig.from_json_file()` .\r\n",
"I added passage pytorch msmarco reranking models to the huggingface / transformers bucket, no need for subclassing / modifications. \r\n\r\nhttps://huggingface.co/nboost",
"> I added passage pytorch msmarco reranking models to the huggingface / transformers bucket, no need for subclassing / modifications.\r\n> \r\n> https://huggingface.co/nboost\r\n\r\nHi, I have a question regarding the output of your models. In transformers library, the bert_base model (`transformers.BertModel` class) has as output a tuple, where the first element is the last hidden state and the 2nd element is the pooler output. The last hidden state is a tensor of size `(batch_size, sequence_length, hidden_dim)`. For example for a batch size of 64 and 512 tokens we obtain for BERT an output of size `(64x512x768)`. The pooler output has size `(batch_size, hidden_size)`. This output is obtained training a linear layer with tanh activation function which had as input the `CLS` token hidden state (last layer hidden-state of the first oken of the sequence). Those weights have been trained from the next sentence prediction.\r\n\r\nYour model follows similar structure, at least `nboost/pt-biobert-base-msmarco`. However, a passage re-ranking model is a sequence classification model. Basically, the passage re-ranking model proposed by https://github.com/nyu-dl/dl4marco-bert is the BERT model fine-tuned with a dense layer on top to learn to classify a sequence as relevant or not relevant. Their first element of the tuple output is a tensor of size `(batch_size, num_classes)`, where num_classes is two (whether the sequence to classify is a relevant document).\r\n\r\nHow should we use your model for passage re-ranking?\r\nThanks a lot",
"> > I added passage pytorch msmarco reranking models to the huggingface / transformers bucket, no need for subclassing / modifications.\r\n> > https://huggingface.co/nboost\r\n> \r\n> Hi, I have a question regarding the output of your models. In transformers library, the bert_base model (`transformers.BertModel` class) has as output a tuple, where the first element is the last hidden state and the 2nd element is the pooler output. The last hidden state is a tensor of size `(batch_size, sequence_length, hidden_dim)`. For example for a batch size of 64 and 512 tokens we obtain for BERT an output of size `(64x512x768)`. The pooler output has size `(batch_size, hidden_size)`. This output is obtained training a linear layer with tanh activation function which had as input the `CLS` token hidden state (last layer hidden-state of the first oken of the sequence). Those weights have been trained from the next sentence prediction.\r\n> \r\n> Your model follows similar structure, at least `nboost/pt-biobert-base-msmarco`. However, a passage re-ranking model is a sequence classification model. Basically, the passage re-ranking model proposed by https://github.com/nyu-dl/dl4marco-bert is the BERT model fine-tuned with a dense layer on top to learn to classify a sequence as relevant or not relevant. Their first element of the tuple output is a tensor of size `(batch_size, num_classes)`, where num_classes is two (whether the sequence to classify is a relevant document).\r\n> \r\n> How should we use your model for passage re-ranking?\r\n> Thanks a lot\r\n\r\nI found where was the problem. As pointed in the model's page (https://huggingface.co/nboost/pt-biobert-base-msmarco#) to load the model you have to do the following:\r\n\r\n`model = AutoModel.from_pretrained(\"nboost/pt-biobert-base-msmarco\")`\r\nThis creates as output a tuple where the first element is a tensor of size `(64x512x768)`.\r\n\r\nHowever, we should do the following, since our problem is a sequence classification:\r\n\r\n`model = AutoModelForSequenceClassification.from_pretrained(\"nboost/pt-biobert-base-msmarco\")`\r\nThis creates the correct output, a tuple where the first element is a tensor of size `(batch_size, num_classes)`\r\n\r\nI suggest to the authors to change the model info and model card in https://huggingface.co/nboost/pt-biobert-base-msmarco#, since it is little bit confusing",
"> You can create a subclass of `BertForSequenceClassification` and add `self.weight` and `self.bias` to the` __init__` method. Then instantiate your new class and it is ready to use it:\r\n> \r\n> ```\r\n> class BertForPassageRanking(BertForSequenceClassification):\r\n> def __init__(self, config):\r\n> super().__init__(config)\r\n> self.weight = torch.autograd.Variable(torch.ones(2, config.hidden_size),\r\n> requires_grad=True)\r\n> self.bias = torch.autograd.Variable(torch.ones(2), requires_grad=True)\r\n> \r\n> \r\n> bert_ranking = BertForPassageRanking.from_pretrained(BERT_PASSAGE_RANKING_PATH,\r\n> from_tf=True)\r\n> ```\r\n> \r\n> `BERT_PASSAGE_RANKING_PATH` is the path where your tf checkpoints files and config json file are stored. You will need to rename the files as follows:\r\n> \r\n> ```\r\n> config.json\r\n> model.ckpt.index\r\n> model.ckpt.meta\r\n> ```\r\n> \r\n> Another option if you do not want to change the file names is to load the json config file with `BertConfig.from_json_file()` and then pass to `BertForPassageRanking.from_pretained()` the path + ckpt file name and the configuration that you have already loaded with `BertConfig.from_json_file()` .\r\n\r\n\r\nThanks a lot. I was having the same question about 'nboost' and was trying this method. However, the output seems to change when I run the same code multiple times, even though i am in the eval mode. Do you have any hint about what I am doing wrong here?\r\n\r\n```\r\nbert_ranking = BertForPassageRanking.from_pretrained(BERT_PASSAGE_RANKING_PATH,\r\n from_tf=True)\r\n\r\ndummy_query = [\r\n 'Rutgers is a good university. I like my experience there.', \r\n \"Hello, my dog is cute. My cute dog is amazing.\",\r\n 'Florida is a nice place but tiger king may be better',\r\n]\r\n\r\ndummy_passage = [\r\n 'My cat is really cute but my dog is even better.',\r\n 'My cat is really cute but my dog is even better.',\r\n 'My cat is really cute but my dog is even better.',\r\n]\r\nbert_ranking.eval()\r\nwith torch.no_grad():\r\n for idx in range(len(dummy_query)):\r\n input_ids = torch.tensor(tokenizer.encode(text=dummy_query[idx], \\\r\n text_pair=dummy_passage[idx], add_special_tokens=True)).unsqueeze(0)\r\n outputs = bert_ranking(input_ids)\r\n print(outputs)\r\n```\r\n",
"> Thanks a lot. I was having the same question about 'nboost' and was trying this method. However, the output seems to change when I run the same code multiple times, even though i am in the eval mode. Do you have any hint about what I am doing wrong here?\r\n> \r\n> ```\r\n> bert_ranking = BertForPassageRanking.from_pretrained(BERT_PASSAGE_RANKING_PATH,\r\n> from_tf=True)\r\n> \r\n> dummy_query = [\r\n> 'Rutgers is a good university. I like my experience there.', \r\n> \"Hello, my dog is cute. My cute dog is amazing.\",\r\n> 'Florida is a nice place but tiger king may be better',\r\n> ]\r\n> \r\n> dummy_passage = [\r\n> 'My cat is really cute but my dog is even better.',\r\n> 'My cat is really cute but my dog is even better.',\r\n> 'My cat is really cute but my dog is even better.',\r\n> ]\r\n> bert_ranking.eval()\r\n> with torch.no_grad():\r\n> for idx in range(len(dummy_query)):\r\n> input_ids = torch.tensor(tokenizer.encode(text=dummy_query[idx], \\\r\n> text_pair=dummy_passage[idx], add_special_tokens=True)).unsqueeze(0)\r\n> outputs = bert_ranking(input_ids)\r\n> print(outputs)\r\n> ```\r\n\r\nSorry, I have no idea. Finally I am not using this approximation. I did not achieve good results for my purpose. Intead, I am using the model provided by nboost (https://huggingface.co/nboost/pt-tinybert-msmarco) and it works fine for me. Remember to load the model as follows:\r\n\r\n`model = AutoModelForSequenceClassification.from_pretrained(\"nboost/pt-tinybert-msmarco\")`\r\n\r\nI am using tinybert-msmarco, however you can use one of the following models:\r\n\r\n```\r\nnboost/pt-bert-base-uncased-msmarco\r\nnboost/pt-bert-large-msmarco\r\nnboost/pt-biobert-base-msmarco\r\nnboost/pt-tinybert-msmarco\r\n```",
"Hi, I have fine tuned a multilingual model, taken from hugging face, on the passage reranking task. Now I am facing difficulties with converting the tensorflow checkpoint to a pytorch model, so that I can use the model using `BertForSequenceClassification`.\r\nI am using the following conversion function, but I get this error \r\n\r\n ```\r\nFile \"<ipython-input-50-1e24e5635ec9>\", line 1, in <module>\r\n convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path)\r\n\r\n File \"<ipython-input-49-22827240b095>\", line 63, in convert_tf_checkpoint_to_pytorch\r\n assert pointer.shape == array.shape\r\n\r\n File \"/home/igli/anaconda3/envs/search-boost/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 593, in __getattr__\r\n raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\n\r\nAttributeError: 'LayerNorm' object has no attribute 'shape'\r\n```\r\n\r\nThe conversion method:\r\n```\r\ndef convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path):\r\n config_path = os.path.abspath(bert_config_file)\r\n tf_path = os.path.abspath(tf_checkpoint_path)\r\n print(\"Converting TensorFlow checkpoint from {} with config at {}\".format(tf_path, config_path))\r\n # Load weights from TF model\r\n init_vars = tf.train.list_variables(tf_path)\r\n names = []\r\n arrays = []\r\n for name, shape in init_vars:\r\n print(\"Loading TF weight {} with shape {}\".format(name, shape))\r\n array = tf.train.load_variable(tf_path, name)\r\n names.append(name)\r\n arrays.append(array)\r\n \r\n # Initialise PyTorch model\r\n config = BertConfig.from_json_file(bert_config_file)\r\n config.num_labels = 2\r\n\r\n print(\"Building PyTorch model from configuration: {}\".format(str(config)))\r\n model = BertForSequenceClassification()(config=config)\r\n\r\n \r\n for name, array in zip(names, arrays):\r\n if name in ['output_weights' , 'output_bias']:\r\n name = 'classifier/' + name\r\n name = name.split('/')\r\n # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v\r\n # which are not required for using pretrained model\r\n if name[-1] in [\"adam_v\", \"adam_m\"]:\r\n print(\"Skipping {}\".format(\"/\".join(name)))\r\n continue\r\n pointer = model\r\n \r\n for m_name in name: \r\n\r\n if re.fullmatch(r'[A-Za-z]+_\\d+', m_name):\r\n l = re.split(r'_(\\d+)', m_name)\r\n else:\r\n l = [m_name]\r\n if l[0] == 'kernel':\r\n pointer = getattr(pointer, 'weight')\r\n elif l[0] == 'output_bias':\r\n pointer = getattr(pointer, 'bias')\r\n pointer = getattr(pointer, 'cls')\r\n elif l[0] == 'output_weights':\r\n pointer = getattr(pointer, 'weight')\r\n pointer = getattr(pointer, 'cls') \r\n else:\r\n try:\r\n pointer = getattr(pointer, l[0])\r\n except:\r\n pass\r\n\r\n if len(l) >= 2:\r\n num = int(l[1])\r\n pointer = pointer[num]\r\n if m_name[-11:] == '_embeddings':\r\n pointer = getattr(pointer, 'weight')\r\n elif m_name == 'kernel':\r\n array = np.transpose(array)\r\n try:\r\n assert pointer.shape == array.shape\r\n except AssertionError as e:\r\n e.args += (pointer.shape, array.shape)\r\n raise\r\n #pass\r\n \r\n print(\"Initialize PyTorch weight {}\".format(name))\r\n array = np.array(array)\r\n print(array)\r\n print(type(array))\r\n pointer.data = torch.from_numpy(array)\r\n \r\n # Save pytorch-model\r\n print(\"Save PyTorch model to {}\".format(pytorch_dump_path))\r\n torch.save(model.state_dict(), pytorch_dump_path)\r\n```\r\nI have currently no clue, where the problem might be. Thanks in advanvce!",
"> Update for latest transformers, add modeling_bert.py:78:\r\n> \r\n> ```python\r\n> for name, array in zip(names, arrays):\r\n> if name in ['output_weights', 'output_bias']:\r\n> name = 'classifier/' + name\r\n> ```\r\n> \r\n> and convert_bert_original_tf_checkpoint_to_pytorch.py\r\n> \r\n> ```python\r\n> config.num_labels = 2\r\n> print(\"Building PyTorch model from configuration: {}\".format(str(config)))\r\n> model = BertForSequenceClassification(config)\r\n> ```\r\n\r\nAs of 26/Mar/2021, \r\n`modeling_bert.py:78` is now around `modeling_bert.py:118`\r\n`convert_bert_original_tf_checkpoint_to_pytorch.py` is now around `convert_bert_original_tf_checkpoint_to_pytorch.py:33`. BTW, don't forget `from transformers import BertForSequenceClassification`"
] | 1,556 | 1,634 | 1,557 | NONE | null | Hi, I am currently trying to implement BERT for passage reranking in PyTorch. Here are the paper and GitHub repo:
https://arxiv.org/abs/1901.04085
https://github.com/nyu-dl/dl4marco-bert
I've downloaded their BERT-large model checkpoint and BERT config for the task. The `convert_tf_checkpoint_to_pytorch` function seems to successfully extract the weights from TensorFlow.
Then, while initialising the PyTorch model:
```
Initialize PyTorch weight ['bert', 'pooler', 'dense', 'kernel']
Skipping bert/pooler/dense/kernel/adam_m
Skipping bert/pooler/dense/kernel/adam_v
Skipping global_step
```
```
~/anaconda3/envs/new_fast_ai/lib/python3.7/site-packages/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py in convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path)
35
36 # Load weights from tf checkpoint
---> 37 load_tf_weights_in_bert(model, tf_checkpoint_path)
38
39 # Save pytorch-model
~/anaconda3/envs/new_fast_ai/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)
88 pointer = getattr(pointer, 'weight')
89 elif l[0] == 'output_bias' or l[0] == 'beta':
---> 90 pointer = getattr(pointer, 'bias')
91 elif l[0] == 'output_weights':
92 pointer = getattr(pointer, 'weight')
~/anaconda3/envs/new_fast_ai/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
533 return modules[name]
534 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 535 type(self).__name__, name))
536
537 def __setattr__(self, name, value):
AttributeError: 'BertForPreTraining' object has no attribute 'bias'
```
I assume it is an issue with the final layer.
What is the best way for me to go about resolving this?
Thanks in advance!
"url": "https://api.github.com/repos/huggingface/transformers/issues/580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/580/timeline | completed | null | null |
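After the conversion steps in the comments above, scoring a query/passage pair is ordinary sequence classification. A hedged sketch of inference (the dump folder, query, and passage are made up; the old pytorch_pretrained_bert API is assumed):

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForSequenceClassification.from_pretrained("./msmarco-dump", num_labels=2)
model.eval()

query_tokens = tokenizer.tokenize("what is passage reranking")
passage_tokens = tokenizer.tokenize("Reranking rescores candidates from a first-stage retriever.")
tokens = ["[CLS]"] + query_tokens + ["[SEP]"] + passage_tokens + ["[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
segment_ids = torch.tensor([[0] * (len(query_tokens) + 2) + [1] * (len(passage_tokens) + 1)])

with torch.no_grad():
    logits = model(input_ids, segment_ids)          # shape (1, 2)
score = torch.softmax(logits, dim=-1)[0, 1].item()  # probability of "relevant"
```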
https://api.github.com/repos/huggingface/transformers/issues/579 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/579/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/579/comments | https://api.github.com/repos/huggingface/transformers/issues/579/events | https://github.com/huggingface/transformers/issues/579 | 440,135,852 | MDU6SXNzdWU0NDAxMzU4NTI= | 579 | Resetting current_random_doc and current_doc | {
"login": "crowegian",
"id": 14296792,
"node_id": "MDQ6VXNlcjE0Mjk2Nzky",
"avatar_url": "https://avatars.githubusercontent.com/u/14296792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/crowegian",
"html_url": "https://github.com/crowegian",
"followers_url": "https://api.github.com/users/crowegian/followers",
"following_url": "https://api.github.com/users/crowegian/following{/other_user}",
"gists_url": "https://api.github.com/users/crowegian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/crowegian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/crowegian/subscriptions",
"organizations_url": "https://api.github.com/users/crowegian/orgs",
"repos_url": "https://api.github.com/users/crowegian/repos",
"events_url": "https://api.github.com/users/crowegian/events{/privacy}",
"received_events_url": "https://api.github.com/users/crowegian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hmm maybe @Rocketknight1 have an insight on this?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | In the class BERTDataset, the two variables `self.current_random_doc` and `self.current_doc` are never reset to 0, even when the corpus is closed and reopened. Is it supposed to work this way? I'd think it would run into issues on a small corpus, where one counter reaches the same document but holds a different value because the file was opened a second time.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ae8c8be1e3fc770968cd3fdb3b643e0b166e540/examples/lm_finetuning/simple_lm_finetuning.py#L42-L231 | {
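For reference, here is a minimal sketch of what resetting the counters on reopen could look like (`reopen_corpus` and the exact attribute names are my guesses based on the script, not existing code):
```python
# Sketch only: reset both document counters whenever the corpus file is
# closed and reopened, so positions stay consistent across passes.
def reopen_corpus(self):
    self.file.close()
    self.file = open(self.corpus_path, "r", encoding=self.encoding)
    self.current_doc = 0            # reset so both passes count identically
    self.random_file.close()
    self.random_file = open(self.corpus_path, "r", encoding=self.encoding)
    self.current_random_doc = 0
    self.line_buffer = None
```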
"url": "https://api.github.com/repos/huggingface/transformers/issues/579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/579/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/578 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/578/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/578/comments | https://api.github.com/repos/huggingface/transformers/issues/578/events | https://github.com/huggingface/transformers/issues/578 | 440,040,888 | MDU6SXNzdWU0NDAwNDA4ODg= | 578 | "Easy" path for classifier training / pre-training | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I join this issue.\r\n\r\nAlso I have a question related to the p.3\r\n> Optionally, modify run_classifier.py to allow loading of fine-tuned BERT language models from the lm_finetuning/ scripts\r\n\r\n`finetune_on_pregenerated.py` script uses `BertForPreTraining` with 2 heads and this is like vanilla training from the original paper. But `run_classifier.py` uses `BertForSequenceClassification` which is being learned only for predicting labels, not masked tokens and isNextSeq. Am I right?\r\nIf so, how can I merge these two approaches? I want to fine-tune the pretrained bert for my dataset and also train a classifier on the top of it.\r\n\r\nThank you.",
"It's a good question and a good discussion @Rocketknight1.\r\n\r\nI think your suggestion of \"splitting\" the present repo in two by extracting the examples in a separate repo and refactoring them to have a better workflow is a good idea.\r\n\r\nAt the present stage, it's important to keep the core library stable as it is used in downstream libraries but the examples are an area where many people would like to contribute to the community and it would be nice to have a more flexible structure that allows such contributions more easily than the present monolithic setup.\r\n\r\nSo if you want to start a new repo splitting the \"usage-part\" of this repo it could be a very interesting idea I think. I'm also happy to help and host it close to the present repo if it's what you had in mind (maybe shoot me an email in that case).",
"Understood! I'm tight on time right now, but if I find time I'll try to build an interface like that and contact you to ensure we sync things up between the two repos.",
"Keeping the discussion open on this",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,563 | 1,563 | MEMBER | null | I've noticed quite a few issues from people outside research who want to fine-tune a pre-trained BERT model to solve a task they're working on, but there's a steep learning curve. Right now, the workflow for someone who wants to use this repo for a custom task is something like this:
1) Understand how DataProcessors work and write a custom DataProcessor (or read the code for the existing data processors and hack your training data into a format that works for them)
2) Understand the `examples/run_classifier.py` script and modify it to use the custom DataProcessor
3) Write a script (or another modification of `run_classifier.py`) that loads unlabelled data and performs inference
This is a lot of work, especially for people who aren't familiar with the codebase! The Tensor2Tensor TF BERT repo is even worse - it's even harder for newcomers to do anything without understanding the code in detail. But it's possible to make BERT accessible to a lot more people with just a few changes:
1) Make a generic DataProcessor and clearly describe the expected input format in the docs so that people don't have to read the code to understand it. For example, the GenericDataProcessor class could expect one training example per line, and one label per line in a separate file (see the sketch after this list). We could also add a GenericPairedDataProcessor, where the classifier reads two sequences as input instead of just one (e.g. for entailment tasks).
2) Add an inference script that loads a saved model state file and performs classifications and writes a file of its predictions. It should read data using the same GenericDataProcessor class, but will not require a label file to be present. If labels are present, it can also write evaluation metrics like accuracy.
3) Optionally, modify `run_classifier.py` to allow loading of fine-tuned BERT language models from the `lm_finetuning/` scripts
4) Document the whole workflow with example calls so that people can use BERT and get state-of-the-art results without needing to read any code!
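To make the idea concrete, here is a minimal sketch of what a `GenericDataProcessor` could look like (the class name and the import path are assumptions, not existing code; it mirrors the `DataProcessor` interface in `examples/run_classifier.py`):
```python
import os
# DataProcessor and InputExample live in examples/run_classifier.py; the
# import path below is an assumption for the sketch, not a packaged module.
from run_classifier import DataProcessor, InputExample

class GenericDataProcessor(DataProcessor):
    """Reads one example per line from <split>.txt and one label per line
    from <split>_labels.txt in the same directory."""

    def get_train_examples(self, data_dir):
        return self._create_examples(data_dir, "train")

    def get_dev_examples(self, data_dir):
        return self._create_examples(data_dir, "dev")

    def get_labels(self):
        # Could equally be read from a labels.txt shipped with the data.
        return ["0", "1"]

    def _create_examples(self, data_dir, split):
        with open(os.path.join(data_dir, split + ".txt"), encoding="utf-8") as f:
            texts = f.read().splitlines()
        with open(os.path.join(data_dir, split + "_labels.txt"), encoding="utf-8") as f:
            labels = f.read().splitlines()
        return [InputExample(guid="%s-%d" % (split, i), text_a=text,
                             text_b=None, label=label)
                for i, (text, label) in enumerate(zip(texts, labels))]
```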
To make it even easier, we could add a functional interface so people could just call something like `pytorch_pretrained_bert.train_classifier()`
Do you think this is a good idea? Or is it too end-user focused - would it work better as a separate repo that used this one as a dependency, especially if this repo is moving away from being BERT-specific and turning into more of a large set of PyTorch Transformer model implementations? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/578/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/578/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/577 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/577/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/577/comments | https://api.github.com/repos/huggingface/transformers/issues/577/events | https://github.com/huggingface/transformers/issues/577 | 440,036,894 | MDU6SXNzdWU0NDAwMzY4OTQ= | 577 | GPT2 lm_labels masking using (-1) throws an index out of range | {
"login": "adigoryl",
"id": 31667817,
"node_id": "MDQ6VXNlcjMxNjY3ODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/31667817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adigoryl",
"html_url": "https://github.com/adigoryl",
"followers_url": "https://api.github.com/users/adigoryl/followers",
"following_url": "https://api.github.com/users/adigoryl/following{/other_user}",
"gists_url": "https://api.github.com/users/adigoryl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adigoryl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adigoryl/subscriptions",
"organizations_url": "https://api.github.com/users/adigoryl/orgs",
"repos_url": "https://api.github.com/users/adigoryl/repos",
"events_url": "https://api.github.com/users/adigoryl/events{/privacy}",
"received_events_url": "https://api.github.com/users/adigoryl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Maybe you get the error because of the position_ids that are most likely wrong. \r\n\r\nI believe positional ids are not needed - you can use this:\r\npredictions, past = model(tokens_tensor,position_ids=None token_type_ids=None, lm_labels=None, past=None)\r\n\r\nand the use the parameters you want at the place you want or leave None if you do not want to use anything.\r\n\r\n",
"With zeros, it is not padded, zeros are \"!\" not \"[PAD]\"",
"tokenizer.convert_tokens_to_ids('[PAD]') this results in 0. I am confused why it happens",
"Really? When I use decode tokenizer.decode(0) results in !.\r\n\r\ntokenizer.encode(x) = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(x)) [or should be], but when i try to use tokenizer.tokenize or tokenizer.convert_tokens_to_ids I am not sure which one, I get that its nto defined even tokenizer.encode works properly.",
"Does the 0 index actually stand for unknown tokens as well?\r\n`tokenizer.convert_tokens_to_ids([\"qrz\"]) ` where \"qrz\" is supposed to be an unknown word. This will give [0]\r\nBut `tokenizer.convert_ids_to_tokens([0])` gives [\"!\"].\r\n",
"That strange - we together get that ! == [PAD]",
"There was some issue fine-tuning GPT-2 with the master branch. This should now be fixed with the merge of #560 (which also add access to GPT-2 medium model).",
"But to actually answer the discussion on this issue (sorry I was reading to quickly), there is no padding token in GPT-2 vocabulary.\r\nSo either you manage to not pad (which is how the model is pretrained) or you need to add a new token to the vocabulary using the special_token functions. This method explained for instance in our blog post on fine-tuning GPT/GPT-2 here: https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313",
"Thanks. So we need to add the special token and then fine tune it?\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,566 | 1,566 | NONE | null | I am fine-tuning GPT2 model using the LMHead with a small number of special tokens.
GPT2's underlying transformer takes the whole input at once; thus, it's important to pad inputs of varying lengths to a fixed length. The GPT2 model documentation offers -1 as the padding value:
> lm_labels: optional language modeling labels: torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [-1, 0, ..., vocab_size]. All labels set to -1 are ignored (masked), the loss is only computed for the labels set in [0, ..., vocab_size].
**When I pad the lm_labels using -1's, the library throws an error.** On the other hand, using any other positive value for the masking works but the value acts like a vocabulary piece and thus makes the input incorrect.
The other issue is that **GPT2's position_ids seem to be compulsory**, whereas the docs say they are optional.
Given an encoded and padded dataset like this (the failing case):
```
input_ids = torch.tensor([
[[50257, 1212, 318, 43086, 2420, 2420, 2420, 50257]],
[[50257, 1212, 318, 43086, 2420, 50257, 0, 0]]
])
position_ids = torch.tensor([
[[1, 1, 1, 1, 1, 1, 1, 1]],
[[1, 1, 1, 1, 1, 1, 0, 0]]
])
lm_labels = torch.tensor([
[[1212, 318, 43086, 2420, 2420, 2420, 50257, 50257]],
[[1212, 318, 43086, 2420, 2420, 2420, -1, -1]]
])
# Changing -1 padding makes the code work but also makes the input incorrect.
lm_labels = torch.tensor([
[[1212, 318, 43086, 2420, 2420, 2420, 50257, 50257]],
[[1212, 318, 43086, 2420, 50257, 50257, 0 , 0]]
])
```
@tholor Could you or someone from the team please fix the masking issue.
The full error trace:
> File "/Users/aw678/PycharmProjects/BERT/gpt2_simplified.py", line 181, in main
losses, past = model(input_ids, position_ids, lm_labels, past=past)
File "/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling_gpt2.py", line 661, in forward
hidden_states, presents = self.transformer(input_ids, position_ids, token_type_ids, past)
File "/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling_gpt2.py", line 590, in forward
token_type_embeds = self.wte(token_type_ids)
File "/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 118, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/functional.py", line 1455, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range at /Users/soumith/mc3build/conda-bld/pytorch_1549593514549/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:191
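Worth noting: the traceback fails inside `self.wte(token_type_ids)`, which suggests the positional call is feeding `lm_labels` in as `token_type_ids`, whose values get looked up in the embedding table (where -1 is out of range). A sketch of the keyword-argument call I would expect to work:
```python
# Sketch: pass lm_labels by keyword so it cannot be consumed as token_type_ids.
# With lm_labels set, GPT2LMHeadModel returns the LM loss; its internal
# CrossEntropyLoss uses ignore_index=-1, so the -1 padding is masked out.
loss = model(input_ids,
             position_ids=None,      # optional despite the error above
             token_type_ids=None,
             lm_labels=lm_labels)
```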
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/577/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/576 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/576/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/576/comments | https://api.github.com/repos/huggingface/transformers/issues/576/events | https://github.com/huggingface/transformers/issues/576 | 440,001,383 | MDU6SXNzdWU0NDAwMDEzODM= | 576 | key error when using run_classifier.py in predict mode, expecting label? | {
"login": "search4mahesh",
"id": 4182331,
"node_id": "MDQ6VXNlcjQxODIzMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4182331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/search4mahesh",
"html_url": "https://github.com/search4mahesh",
"followers_url": "https://api.github.com/users/search4mahesh/followers",
"following_url": "https://api.github.com/users/search4mahesh/following{/other_user}",
"gists_url": "https://api.github.com/users/search4mahesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/search4mahesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/search4mahesh/subscriptions",
"organizations_url": "https://api.github.com/users/search4mahesh/orgs",
"repos_url": "https://api.github.com/users/search4mahesh/repos",
"events_url": "https://api.github.com/users/search4mahesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/search4mahesh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | Hi,
I am getting a KeyError when using run_classifier.py in predict mode.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py
At prediction time we don't have labels, hence it raises a KeyError.
The run_squad example handles this well, since it has an is_training flag.
Could you please suggest a fix?
```python
if output_mode == "classification":
    label_id = label_map[example.label]
elif output_mode == "regression":
    label_id = float(example.label)
else:
    raise KeyError(output_mode)
```
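One possible workaround (a sketch, not existing repo code) is to only look up labels when they exist and use a dummy id at prediction time:
```python
if example.label is None:
    label_id = 0  # dummy value, ignored when we only need predictions
elif output_mode == "classification":
    label_id = label_map[example.label]
elif output_mode == "regression":
    label_id = float(example.label)
else:
    raise KeyError(output_mode)
```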
Thanks
Mahesh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/576/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/575 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/575/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/575/comments | https://api.github.com/repos/huggingface/transformers/issues/575/events | https://github.com/huggingface/transformers/issues/575 | 439,963,432 | MDU6SXNzdWU0Mzk5NjM0MzI= | 575 | Different BERT representations when text is with and without single quotes | {
"login": "Radhikadua123",
"id": 16516248,
"node_id": "MDQ6VXNlcjE2NTE2MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/16516248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Radhikadua123",
"html_url": "https://github.com/Radhikadua123",
"followers_url": "https://api.github.com/users/Radhikadua123/followers",
"following_url": "https://api.github.com/users/Radhikadua123/following{/other_user}",
"gists_url": "https://api.github.com/users/Radhikadua123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Radhikadua123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Radhikadua123/subscriptions",
"organizations_url": "https://api.github.com/users/Radhikadua123/orgs",
"repos_url": "https://api.github.com/users/Radhikadua123/repos",
"events_url": "https://api.github.com/users/Radhikadua123/events{/privacy}",
"received_events_url": "https://api.github.com/users/Radhikadua123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,556 | 1,556 | 1,556 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/575/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/574 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/574/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/574/comments | https://api.github.com/repos/huggingface/transformers/issues/574/events | https://github.com/huggingface/transformers/issues/574 | 439,706,731 | MDU6SXNzdWU0Mzk3MDY3MzE= | 574 | understanding of the output from TransfoXLModel | {
"login": "cherepanovic",
"id": 10064548,
"node_id": "MDQ6VXNlcjEwMDY0NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/10064548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cherepanovic",
"html_url": "https://github.com/cherepanovic",
"followers_url": "https://api.github.com/users/cherepanovic/followers",
"following_url": "https://api.github.com/users/cherepanovic/following{/other_user}",
"gists_url": "https://api.github.com/users/cherepanovic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cherepanovic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cherepanovic/subscriptions",
"organizations_url": "https://api.github.com/users/cherepanovic/orgs",
"repos_url": "https://api.github.com/users/cherepanovic/repos",
"events_url": "https://api.github.com/users/cherepanovic/events{/privacy}",
"received_events_url": "https://api.github.com/users/cherepanovic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | The output of the TransfoXLModel has the size [1, 3, 1024] if the input has three tokens.
`predictions, mems = model(tokens_tensor, mems=None)`
The docstring from the code is:
```
Outputs:
A tuple of (last_hidden_state, new_mems)
`last_hidden_state`: the encoded-hidden-states at the top of the model
as a torch.FloatTensor of size [batch_size, sequence_length, self.config.d_model]
`new_mems`: list (num layers) of updated mem states at the entry of each layer
each mem state is a torch.FloatTensor of size [self.config.mem_len, batch_size, self.config.d_model]
Note that the first two dimensions are transposed in `mems` with regards to `input_ids` and `target`
```
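To make the docstring concrete, here is a minimal sketch (assuming the pretrained `transfo-xl-wt103` weights; the token ids are arbitrary) that inspects both outputs:
```python
import torch
from pytorch_pretrained_bert import TransfoXLModel

model = TransfoXLModel.from_pretrained("transfo-xl-wt103")
model.eval()

tokens_tensor = torch.tensor([[0, 1, 2]])  # batch_size=1, sequence_length=3
last_hidden_state, new_mems = model(tokens_tensor, mems=None)

print(last_hidden_state.shape)  # torch.Size([1, 3, 1024]): one 1024-dim vector per input token
print(len(new_mems))            # one cached hidden-state tensor per layer (18 for wt103)
print(new_mems[0].shape)        # [mem_len, batch_size, 1024] per the docstring; batch is dim 1
```
In short, `last_hidden_state` holds one d_model-sized vector per input token, while `new_mems` are the cached hidden states you pass back as `mems=` on the next call so attention can reach across segment boundaries.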
Could the output be explained more precisely? I would be very grateful! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/574/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/573 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/573/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/573/comments | https://api.github.com/repos/huggingface/transformers/issues/573/events | https://github.com/huggingface/transformers/issues/573 | 439,694,293 | MDU6SXNzdWU0Mzk2OTQyOTM= | 573 | GPT2 doesn't accept inputs of varying tokens length (despite the padding at the end) | {
"login": "adigoryl",
"id": 31667817,
"node_id": "MDQ6VXNlcjMxNjY3ODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/31667817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adigoryl",
"html_url": "https://github.com/adigoryl",
"followers_url": "https://api.github.com/users/adigoryl/followers",
"following_url": "https://api.github.com/users/adigoryl/following{/other_user}",
"gists_url": "https://api.github.com/users/adigoryl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adigoryl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adigoryl/subscriptions",
"organizations_url": "https://api.github.com/users/adigoryl/orgs",
"repos_url": "https://api.github.com/users/adigoryl/repos",
"events_url": "https://api.github.com/users/adigoryl/events{/privacy}",
"received_events_url": "https://api.github.com/users/adigoryl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@thomwolf I was wondering what are your thought on this issue?",
"The bug in the library causing the index out of range error comes from masking (-1) the LM labels.\r\n\r\n> lm_labels: optional language modeling labels: torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [-1, 0, ..., vocab_size]. All labels set to -1 are ignored (masked), the loss is only computed for the labels set in [0, ..., vocab_size].\r\n\r\nWhen I pad the lm_labels to a fixed size using -1's, the library throws an error, on the other hand, using any other positive value works but acts like a vocabulary piece and thus makes the input incorrect.\r\n@thomwolf Could someone please fix this?",
"Yes there is a PR (#560) fixing a number of issues for fine-tuning GPT-2.\r\nShould be merged soon (this week hopefully).",
"Any update on #560? ",
"I guess that you should try add pading at the begining - it predicts next word not first so the padding should be added in front.\r\n[50258, 1212, 318, 617, 43086, 50258, 0] should be\r\n[0,50258, 1212, 318, 617, 43086, 50258]",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I've checked that outputs from right-padded input tensors and from no-padding input tensors are different. Personally, the latter makes little more sense. A module for input masking and its corresponding position embeddings need to be implemented. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,571 | 1,571 | NONE | null | I have noted a very strange behaviour in GPT2 and I can't figure out why it happens. When all of the inputs in the dataset have the same token length, training works; however, when even one of the inputs has a different token length, the library throws an error. This is very strange, since before I feed the inputs into the model I have a method which pads every input to a fixed shape/length.
**The working case pseudocode:**
Note that the dataset is of tensor type and, when fed to the model, has the shape specified in the docs (n_batch, input_len)
dataset = [
[50258, 1212, 318, 617, 43086, 2420, 50258],
[50258, 318, 1212 , 617, 43086, 2420, 50258],
[50258, 1212, 318, 617, 43086, 2420, 50258],
]
_(all of the inputs in the dataset have the same token length)_
**The not working case:**
dataset = [
[50258, 1212, 318, 617, 43086, 2420, 50258],
[50258, 1212, 318, 617, 43086, 50258, 0],
[50258, 1212, 318, 617, 43086, 2420, 50258],
]
_(e.g. the second input has a different number of tokens than the other two, however, is padded with a 0 so that all of the inputs are of the same size)_
In fact, when every input in the dataset has the same token length but extra padding at the end, the library also throws an error:
dataset = [
[50258, 1212, 318, 617, 43086, 2420, 50258, 0, 0],
[50258, 318, 1212 , 617, 43086, 2420, 50258, 0, 0],
[50258, 1212, 318, 617, 43086, 2420, 50258, 0, 0],
]
A toy example to replicate this error:
[gpt2_simplified.py.zip](https://github.com/huggingface/pytorch-pretrained-BERT/files/3139047/gpt2_simplified.py.zip)
The full error traceback is:
```
Epoch:   0%| | 0/3 [00:00<?, ?it/s]
Training:   0%| | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/local_path/gpt2_simplified.py", line 166, in <module>
    main()
  File "/local_path/gpt2_simplified.py", line 142, in main
    losses, past = model(input_ids, position_ids, lm_labels, past=past)
  File "/local_path/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/local_path/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling_gpt2.py", line 661, in forward
    hidden_states, presents = self.transformer(input_ids, position_ids, token_type_ids, past)
  File "/local_path/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/local_path/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling_gpt2.py", line 587, in forward
    position_embeds = self.wpe(position_ids)
  File "/local_path/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/local_path/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 118, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/local_path/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/functional.py", line 1454, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range at /Users/soumith/mc3build/conda-bld/pytorch_1549593514549/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:191
```
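One thing I'd want to rule out (a hedged guess, since id 50258 appears in the inputs while the pretrained GPT-2 vocabulary has 50257 entries): any id that exceeds either embedding table raises exactly this kind of index error. A quick sanity check, assuming `model` is a `GPT2LMHeadModel`:
```python
# Sketch: verify all ids fit the embedding tables before the forward pass.
vocab_size = model.transformer.wte.num_embeddings   # 50257 for pretrained GPT-2
n_positions = model.transformer.wpe.num_embeddings  # 1024 positions

assert int(input_ids.max()) < vocab_size, "token id out of range for wte - resize embeddings for special tokens"
assert int(position_ids.max()) < n_positions, "position id out of range for wpe"
```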
Thanks for the help in advance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/573/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/572 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/572/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/572/comments | https://api.github.com/repos/huggingface/transformers/issues/572/events | https://github.com/huggingface/transformers/issues/572 | 439,546,931 | MDU6SXNzdWU0Mzk1NDY5MzE= | 572 | BERT pre-training using only domain specific text | {
"login": "nightowlcity",
"id": 50201930,
"node_id": "MDQ6VXNlcjUwMjAxOTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/50201930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nightowlcity",
"html_url": "https://github.com/nightowlcity",
"followers_url": "https://api.github.com/users/nightowlcity/followers",
"following_url": "https://api.github.com/users/nightowlcity/following{/other_user}",
"gists_url": "https://api.github.com/users/nightowlcity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nightowlcity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nightowlcity/subscriptions",
"organizations_url": "https://api.github.com/users/nightowlcity/orgs",
"repos_url": "https://api.github.com/users/nightowlcity/repos",
"events_url": "https://api.github.com/users/nightowlcity/events{/privacy}",
"received_events_url": "https://api.github.com/users/nightowlcity/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | BERT is pre-trained using Wikipedia and other sources of normal text, but my problem domain has a very specific vocabulary & grammar. Is there an easy way to train BERT completely from domain specific data (preferably using Keras)?
The amount of pre-training data is not an issue and we are not looking for SOTA results. We would do fine with a smaller-scale model, but it has to be trained from our data. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/572/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/571 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/571/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/571/comments | https://api.github.com/repos/huggingface/transformers/issues/571/events | https://github.com/huggingface/transformers/pull/571 | 439,542,506 | MDExOlB1bGxSZXF1ZXN0Mjc1MzI0NjE1 | 571 | Fix documentation typo | {
"login": "MottoX",
"id": 6220861,
"node_id": "MDQ6VXNlcjYyMjA4NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6220861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MottoX",
"html_url": "https://github.com/MottoX",
"followers_url": "https://api.github.com/users/MottoX/followers",
"following_url": "https://api.github.com/users/MottoX/following{/other_user}",
"gists_url": "https://api.github.com/users/MottoX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MottoX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MottoX/subscriptions",
"organizations_url": "https://api.github.com/users/MottoX/orgs",
"repos_url": "https://api.github.com/users/MottoX/repos",
"events_url": "https://api.github.com/users/MottoX/events{/privacy}",
"received_events_url": "https://api.github.com/users/MottoX/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,556 | 1,557 | 1,557 | CONTRIBUTOR | null | Just fix some apparent documentation typos. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/571/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/571",
"html_url": "https://github.com/huggingface/transformers/pull/571",
"diff_url": "https://github.com/huggingface/transformers/pull/571.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/571.patch",
"merged_at": 1557324374000
} |
https://api.github.com/repos/huggingface/transformers/issues/570 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/570/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/570/comments | https://api.github.com/repos/huggingface/transformers/issues/570/events | https://github.com/huggingface/transformers/pull/570 | 439,538,398 | MDExOlB1bGxSZXF1ZXN0Mjc1MzIxMzAx | 570 | Create optimizer only when args.do_train is True | {
"login": "MottoX",
"id": 6220861,
"node_id": "MDQ6VXNlcjYyMjA4NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6220861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MottoX",
"html_url": "https://github.com/MottoX",
"followers_url": "https://api.github.com/users/MottoX/followers",
"following_url": "https://api.github.com/users/MottoX/following{/other_user}",
"gists_url": "https://api.github.com/users/MottoX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MottoX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MottoX/subscriptions",
"organizations_url": "https://api.github.com/users/MottoX/orgs",
"repos_url": "https://api.github.com/users/MottoX/repos",
"events_url": "https://api.github.com/users/MottoX/events{/privacy}",
"received_events_url": "https://api.github.com/users/MottoX/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thanks @MottoX!"
] | 1,556 | 1,557 | 1,557 | CONTRIBUTOR | null | I am facing the same problem as #544 . When only setting args.do_eval to evaluate a trained model, there will be an error due to optimizer initialization. I think it is unnecessary to create an optimizer if args.do_train is False. Thanks for your review. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/570/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/570",
"html_url": "https://github.com/huggingface/transformers/pull/570",
"diff_url": "https://github.com/huggingface/transformers/pull/570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/570.patch",
"merged_at": 1557324471000
} |
https://api.github.com/repos/huggingface/transformers/issues/569 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/569/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/569/comments | https://api.github.com/repos/huggingface/transformers/issues/569/events | https://github.com/huggingface/transformers/issues/569 | 439,365,268 | MDU6SXNzdWU0MzkzNjUyNjg= | 569 | License of the pretrained models | {
"login": "xuhdev",
"id": 325476,
"node_id": "MDQ6VXNlcjMyNTQ3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/325476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuhdev",
"html_url": "https://github.com/xuhdev",
"followers_url": "https://api.github.com/users/xuhdev/followers",
"following_url": "https://api.github.com/users/xuhdev/following{/other_user}",
"gists_url": "https://api.github.com/users/xuhdev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuhdev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuhdev/subscriptions",
"organizations_url": "https://api.github.com/users/xuhdev/orgs",
"repos_url": "https://api.github.com/users/xuhdev/repos",
"events_url": "https://api.github.com/users/xuhdev/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuhdev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just found it's under Apache v2 in the Google bert repo. Closing."
] | 1,556 | 1,556 | 1,556 | CONTRIBUTOR | null | I noticed that once `from_pretrained` is called, the library automatically downloads a pretrained model from a URL. However, I found no license included in the downloaded pretrained model file. What is the license of the pretrained models? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/569/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/568 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/568/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/568/comments | https://api.github.com/repos/huggingface/transformers/issues/568/events | https://github.com/huggingface/transformers/issues/568 | 439,228,906 | MDU6SXNzdWU0MzkyMjg5MDY= | 568 | Fine-tuning Bert | {
"login": "goyalsaransh97",
"id": 26386379,
"node_id": "MDQ6VXNlcjI2Mzg2Mzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/26386379?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goyalsaransh97",
"html_url": "https://github.com/goyalsaransh97",
"followers_url": "https://api.github.com/users/goyalsaransh97/followers",
"following_url": "https://api.github.com/users/goyalsaransh97/following{/other_user}",
"gists_url": "https://api.github.com/users/goyalsaransh97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goyalsaransh97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goyalsaransh97/subscriptions",
"organizations_url": "https://api.github.com/users/goyalsaransh97/orgs",
"repos_url": "https://api.github.com/users/goyalsaransh97/repos",
"events_url": "https://api.github.com/users/goyalsaransh97/events{/privacy}",
"received_events_url": "https://api.github.com/users/goyalsaransh97/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I was facing the same issue when finetuning using `finetune_on_pregenerated.py`. The problem was in the fact that I have some empty sentences in my dataset. Also there are some special characters, like `\\t` (tabulation) which can make a mess and should be cleared. \r\nI preprocess the text like this:\r\n```\r\nfor_train = for_train.dropna(subset=['text'])\r\n\r\nimport re\r\nwith open('for_pretraining_full.txt', \"w\", encoding='utf-8') as writer:\r\n for doc in for_train['text'].tolist():\r\n doc = doc.replace(u'\\xa0', u' ').replace(u'\\u200b', u' ').replace(u'\\u206f', u' ').replace(u'\\u206e', u' ').replace(u'\\u206b', u' ').replace(u'\\u206c', u' ').replace(u'\\u2063', u' ').replace(u'\\u200d', u' ').strip() # replace some special unicode chars\r\n doc = re.sub('\\t+', '', doc) # replace tabs\r\n doc = doc.replace('. ', '\\n')\r\n doc = re.sub('\\n+( )*(\\n+)*', '\\n', doc) # replace several consecutive new lines by a single one\r\n doc = doc.strip()\r\n if (doc != ''):\r\n writer.write(doc)\r\n writer.write('\\n\\n')\r\n```\r\n\r\nAlso you can try to find problem sentances, debuging your `simple_lm_finetuning.py`.",
"I ran into an issue on this where some sneaky \\n were still sneaking through even with the above code. I remedied this by just doing \r\n\r\n`if \"\\n\" in doc:\r\n\r\n doc = re.sub('\\n', ' ', doc)`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,565 | 1,565 | NONE | null | I want to fine-tune Bert's LM for a specific corpus. I converted the text into the format specified in the documentation and ran the fine-tuning scripts provided. I'm getting the following error:
File "simple_lm_finetuning.py", line 156, in random_sent
assert len(t2) > 0
AssertionError
I'm getting a similar error in the pregenerate_.... script. What could be the reason? Is it due to possible OOV words? My corpus does contain some emoticons.
Thanks in advance | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/568/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/567 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/567/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/567/comments | https://api.github.com/repos/huggingface/transformers/issues/567/events | https://github.com/huggingface/transformers/issues/567 | 439,115,855 | MDU6SXNzdWU0MzkxMTU4NTU= | 567 | about pytorch 1.1.0 release | {
"login": "yeontaek",
"id": 22782221,
"node_id": "MDQ6VXNlcjIyNzgyMjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/22782221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeontaek",
"html_url": "https://github.com/yeontaek",
"followers_url": "https://api.github.com/users/yeontaek/followers",
"following_url": "https://api.github.com/users/yeontaek/following{/other_user}",
"gists_url": "https://api.github.com/users/yeontaek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yeontaek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yeontaek/subscriptions",
"organizations_url": "https://api.github.com/users/yeontaek/orgs",
"repos_url": "https://api.github.com/users/yeontaek/repos",
"events_url": "https://api.github.com/users/yeontaek/events{/privacy}",
"received_events_url": "https://api.github.com/users/yeontaek/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi,\r\n\r\nThe repo is compatible with PyTorch 1.1.0.\r\n\r\nBut, we probably won't switch to PyTorch Multi-headed-Attention module since this would mean refactoring all the models and adding complexity to the tensorflow conversion codes for unclear gains.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | Hi, today PyTorch 1.1.0 was released (https://github.com/pytorch/pytorch/releases/tag/v1.1.0).
Version 1.1.0 adds a new module implementing multi-headed attention,
and various bugs have been fixed.
Do you plan to update the repo to support that version?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/567/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/566 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/566/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/566/comments | https://api.github.com/repos/huggingface/transformers/issues/566/events | https://github.com/huggingface/transformers/issues/566 | 439,085,421 | MDU6SXNzdWU0MzkwODU0MjE= | 566 | Bug in run_classifier.py fp16 learning rate | {
"login": "dalek-who",
"id": 31960962,
"node_id": "MDQ6VXNlcjMxOTYwOTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/31960962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dalek-who",
"html_url": "https://github.com/dalek-who",
"followers_url": "https://api.github.com/users/dalek-who/followers",
"following_url": "https://api.github.com/users/dalek-who/following{/other_user}",
"gists_url": "https://api.github.com/users/dalek-who/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dalek-who/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dalek-who/subscriptions",
"organizations_url": "https://api.github.com/users/dalek-who/orgs",
"repos_url": "https://api.github.com/users/dalek-who/repos",
"events_url": "https://api.github.com/users/dalek-who/events{/privacy}",
"received_events_url": "https://api.github.com/users/dalek-who/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm asking the same question",
"I had been dealing with the issue of low and decreasing accuracy when I use fp16, as shown below,\r\n\r\n```\r\nEpoch 1 - Batch 1600/287417 - Training Acc. 0.106250 - Training Loss 2.295977\r\nEpoch 1 - Batch 3200/287417 - Training Acc. 0.098125 - Training Loss 2.299707\r\nEpoch 1 - Batch 4800/287417 - Training Acc. 0.094792 - Training Loss 2.304948\r\nEpoch 1 - Batch 6400/287417 - Training Acc. 0.093125 - Training Loss 2.307725\r\nEpoch 1 - Batch 8000/287417 - Training Acc. 0.092625 - Training Loss 2.305703\r\nEpoch 1 - Batch 9600/287417 - Training Acc. 0.091667 - Training Loss 2.306758\r\nEpoch 1 - Batch 11200/287417 - Training Acc. 0.092589 - Training Loss 2.306116\r\nEpoch 1 - Batch 12800/287417 - Training Acc. 0.092969 - Training Loss 2.307227\r\nEpoch 1 - Batch 14400/287417 - Training Acc. 0.091458 - Training Loss 2.310017\r\nEpoch 1 - Batch 16000/287417 - Training Acc. 0.091000 - Training Loss 2.308750\r\nEpoch 1 - Batch 17600/287417 - Training Acc. 0.090795 - Training Loss 2.309631\r\nEpoch 1 - Batch 19200/287417 - Training Acc. 0.090625 - Training Loss 2.310771\r\nEpoch 1 - Batch 20800/287417 - Training Acc. 0.090433 - Training Loss 2.310832\r\nEpoch 1 - Batch 22400/287417 - Training Acc. 0.090625 - Training Loss 2.311030\r\nEpoch 1 - Batch 24000/287417 - Training Acc. 0.090083 - Training Loss 2.311357\r\nEpoch 1 - Batch 25600/287417 - Training Acc. 0.089883 - Training Loss 2.311748\r\nEpoch 1 - Batch 27200/287417 - Training Acc. 0.089449 - Training Loss 2.312302\r\nEpoch 1 - Batch 28800/287417 - Training Acc. 0.088993 - Training Loss 2.312582\r\nEpoch 1 - Batch 30400/287417 - Training Acc. 0.088651 - Training Loss 2.313187\r\nEpoch 1 - Batch 32000/287417 - Training Acc. 0.088656 - Training Loss 2.313006\r\nEpoch 1 - Batch 33600/287417 - Training Acc. 0.088750 - Training Loss 2.313333\r\nEpoch 1 - Batch 35200/287417 - Training Acc. 0.088665 - Training Loss 2.314015\r\nEpoch 1 - Batch 36800/287417 - Training Acc. 0.088641 - Training Loss 2.313631\r\nEpoch 1 - Batch 38400/287417 - Training Acc. 0.088854 - Training Loss 2.313276\r\nEpoch 1 - Batch 40000/287417 - Training Acc. 0.089325 - Training Loss 2.312648\r\nEpoch 1 - Batch 41600/287417 - Training Acc. 0.089183 - Training Loss 2.312943\r\nEpoch 1 - Batch 43200/287417 - Training Acc. 0.089051 - Training Loss 2.312587\r\nEpoch 1 - Batch 44800/287417 - Training Acc. 0.088929 - Training Loss 2.313172\r\nEpoch 1 - Batch 46400/287417 - Training Acc. 0.088793 - Training Loss 2.312671\r\nEpoch 1 - Batch 48000/287417 - Training Acc. 0.088479 - Training Loss 2.313255\r\nEpoch 1 - Batch 49600/287417 - Training Acc. 0.088972 - Training Loss 2.312710\r\nEpoch 1 - Batch 51200/287417 - Training Acc. 0.088906 - Training Loss 2.312372\r\n```\r\n\r\nHowever, after I made the change in `lr_this_step` that you indicated, I've started to get normal results, as follows,\r\n\r\n```\r\nEpoch 1 - Batch 1600/287417 - Training Acc. 0.156250 - Training Loss 2.224727\r\nEpoch 1 - Batch 3200/287417 - Training Acc. 0.200937 - Training Loss 2.166289\r\nEpoch 1 - Batch 4800/287417 - Training Acc. 0.245833 - Training Loss 2.098184\r\nEpoch 1 - Batch 6400/287417 - Training Acc. 0.299063 - Training Loss 2.018706\r\nEpoch 1 - Batch 8000/287417 - Training Acc. 0.351625 - Training Loss 1.937730\r\nEpoch 1 - Batch 9600/287417 - Training Acc. 0.400833 - Training Loss 1.855378\r\nEpoch 1 - Batch 11200/287417 - Training Acc. 0.444018 - Training Loss 1.768468\r\nEpoch 1 - Batch 12800/287417 - Training Acc. 
0.481875 - Training Loss 1.685869\r\nEpoch 1 - Batch 14400/287417 - Training Acc. 0.513889 - Training Loss 1.606483\r\nEpoch 1 - Batch 16000/287417 - Training Acc. 0.536937 - Training Loss 1.537816\r\nEpoch 1 - Batch 17600/287417 - Training Acc. 0.556364 - Training Loss 1.477735\r\nEpoch 1 - Batch 19200/287417 - Training Acc. 0.576146 - Training Loss 1.418323\r\nEpoch 1 - Batch 20800/287417 - Training Acc. 0.592019 - Training Loss 1.367327\r\nEpoch 1 - Batch 22400/287417 - Training Acc. 0.606429 - Training Loss 1.321059\r\nEpoch 1 - Batch 24000/287417 - Training Acc. 0.617542 - Training Loss 1.281488\r\nEpoch 1 - Batch 25600/287417 - Training Acc. 0.627109 - Training Loss 1.246746\r\nEpoch 1 - Batch 27200/287417 - Training Acc. 0.637500 - Training Loss 1.211883\r\nEpoch 1 - Batch 28800/287417 - Training Acc. 0.645938 - Training Loss 1.182604\r\nEpoch 1 - Batch 30400/287417 - Training Acc. 0.652204 - Training Loss 1.158571\r\nEpoch 1 - Batch 32000/287417 - Training Acc. 0.658875 - Training Loss 1.134463\r\nEpoch 1 - Batch 33600/287417 - Training Acc. 0.665179 - Training Loss 1.111719\r\nEpoch 1 - Batch 35200/287417 - Training Acc. 0.671023 - Training Loss 1.089363\r\nEpoch 1 - Batch 36800/287417 - Training Acc. 0.676848 - Training Loss 1.068860\r\nEpoch 1 - Batch 38400/287417 - Training Acc. 0.681536 - Training Loss 1.050721\r\nEpoch 1 - Batch 40000/287417 - Training Acc. 0.685775 - Training Loss 1.034663\r\nEpoch 1 - Batch 41600/287417 - Training Acc. 0.690361 - Training Loss 1.017672\r\nEpoch 1 - Batch 43200/287417 - Training Acc. 0.693866 - Training Loss 1.004058\r\nEpoch 1 - Batch 44800/287417 - Training Acc. 0.698013 - Training Loss 0.990084\r\nEpoch 1 - Batch 46400/287417 - Training Acc. 0.701552 - Training Loss 0.977086\r\nEpoch 1 - Batch 48000/287417 - Training Acc. 0.704854 - Training Loss 0.965735\r\nEpoch 1 - Batch 49600/287417 - Training Acc. 0.708266 - Training Loss 0.953387\r\nEpoch 1 - Batch 51200/287417 - Training Acc. 0.712012 - Training Loss 0.940919\r\nEpoch 1 - Batch 52800/287417 - Training Acc. 0.714697 - Training Loss 0.929941\r\n```\r\n\r\nThanks!",
"@burcturkoglu \r\nHow about performance comparison with fp32?",
"@yeontaek \r\nI trained BERT for classification with my own data by _run_classifier_ script.\r\n\r\nHere are the benchmarks for fp32 vs fp16 in both single Tesla V100 and in 4 Tesla V100 with _DataParallel_,\r\n\r\n*_fp32:_*\r\n\r\n- Single Tesla V100 - Training Duration 17,739 seconds\r\n\r\n- 4 Tesla V100 - Training Duration 9,342 seconds\r\n\r\n*_fp16:_*\r\n\r\n- Single Tesla V100 - Training Duration 12,297 seconds\r\n\r\n- 4 Tesla V100 - Training Duration 6,330 seconds\r\n\r\nIn both types of instances, it gives approximately 30% increase in speed without a change in accuracy. ",
"@burcturkoglu \r\nThank you so much. It was a big help. "
] | 1,556 | 1,557 | 1,557 | NONE | null | After the latest update, the fp16 learning rate in run_classifier.py keeps increasing.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/2dee86319dbad575352358b8f2fb4129940e381a/examples/run_classifier.py#L857-L858
I think the right code is: lr_this_step = args.learning_rate * warmup_linear.get_lr(global_step, args.warmup_proportion).
https://github.com/huggingface/pytorch-pretrained-BERT/blob/2dee86319dbad575352358b8f2fb4129940e381a/pytorch_pretrained_bert/optimization.py#L53-L62
In this function, the first argument should be the raw step; passing global_step/num_train_optimization_steps means the progress gets divided by t_total a second time inside WarmupLinearSchedule, so the resulting value is too small for the schedule to ever decrease the rate.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/2dee86319dbad575352358b8f2fb4129940e381a/pytorch_pretrained_bert/optimization.py#L162-L171
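Concretely, the fp16 branch would then read something like this (a sketch of my proposed fix, mirroring the surrounding code in run_classifier.py):
```python
if args.fp16:
    # get_lr divides the step by t_total internally, so it must receive the
    # raw global_step; pre-dividing by num_train_optimization_steps shrinks
    # the progress twice and keeps the schedule stuck in warmup.
    lr_this_step = args.learning_rate * warmup_linear.get_lr(
        global_step, args.warmup_proportion)
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr_this_step
optimizer.step()
optimizer.zero_grad()
global_step += 1
```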
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/566/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/565 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/565/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/565/comments | https://api.github.com/repos/huggingface/transformers/issues/565/events | https://github.com/huggingface/transformers/issues/565 | 439,052,352 | MDU6SXNzdWU0MzkwNTIzNTI= | 565 | Results of Fine-tuned model changes in every run | {
"login": "cagrikymk",
"id": 15324155,
"node_id": "MDQ6VXNlcjE1MzI0MTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/15324155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cagrikymk",
"html_url": "https://github.com/cagrikymk",
"followers_url": "https://api.github.com/users/cagrikymk/followers",
"following_url": "https://api.github.com/users/cagrikymk/following{/other_user}",
"gists_url": "https://api.github.com/users/cagrikymk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cagrikymk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cagrikymk/subscriptions",
"organizations_url": "https://api.github.com/users/cagrikymk/orgs",
"repos_url": "https://api.github.com/users/cagrikymk/repos",
"events_url": "https://api.github.com/users/cagrikymk/events{/privacy}",
"received_events_url": "https://api.github.com/users/cagrikymk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | After I load the model with:
```
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased", state_dict=model_state_dict)
model.eval()
```
The prediction results are not stable. They change drastically in every run.
It gets stable if I fix the seed, but I don't understand why we need that. Isn't the model supposed to be fixed since we are just evaluating? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/565/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/564 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/564/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/564/comments | https://api.github.com/repos/huggingface/transformers/issues/564/events | https://github.com/huggingface/transformers/pull/564 | 439,051,911 | MDExOlB1bGxSZXF1ZXN0Mjc0OTQ1MzM4 | 564 | Fix #537 | {
"login": "8enmann",
"id": 1021104,
"node_id": "MDQ6VXNlcjEwMjExMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1021104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/8enmann",
"html_url": "https://github.com/8enmann",
"followers_url": "https://api.github.com/users/8enmann/followers",
"following_url": "https://api.github.com/users/8enmann/following{/other_user}",
"gists_url": "https://api.github.com/users/8enmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/8enmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/8enmann/subscriptions",
"organizations_url": "https://api.github.com/users/8enmann/orgs",
"repos_url": "https://api.github.com/users/8enmann/repos",
"events_url": "https://api.github.com/users/8enmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/8enmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for that @8enmann!"
] | 1,556 | 1,556 | 1,556 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/564/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/564",
"html_url": "https://github.com/huggingface/transformers/pull/564",
"diff_url": "https://github.com/huggingface/transformers/pull/564.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/564.patch",
"merged_at": 1556702327000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/563 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/563/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/563/comments | https://api.github.com/repos/huggingface/transformers/issues/563/events | https://github.com/huggingface/transformers/issues/563 | 438,999,408 | MDU6SXNzdWU0Mzg5OTk0MDg= | 563 | performance does not change but loss decrease | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | After training a bert-lstm-crf model for 25 epochs, the performance on the training set stops changing.
Here is the performance on the train set, dev set and test set:
```
25th epoch:
tensor(10267.6279, device='cuda:0')
(0.42706720346856614, 0.4595134955014995, 0.4426966292134832)
(0.43147208121827413, 0.4271356783919598, 0.42929292929292934)
(0.4460093896713615, 0.4668304668304668, 0.4561824729891957)
26th epoch:
tensor(10219.3398, device='cuda:0')
(0.44544364508393286, 0.4951682772409197, 0.46899163642101943)
(0.4469135802469136, 0.4547738693467337, 0.45080946450809467)
(0.45871559633027525, 0.4914004914004914, 0.4744958481613286)
27th epoch:
tensor(10169.0742, device='cuda:0')
(0.44544364508393286, 0.4951682772409197, 0.46899163642101943)
(0.4469135802469136, 0.4547738693467337, 0.45080946450809467)
(0.45871559633027525, 0.4914004914004914, 0.4744958481613286)
more epochs:
......(same performance but lower loss)
```
And here is the main code:
```
for epoch in tqdm(range(200)):
    loss = train_one_epoch(dataloader=source_train_dataloader,
                           model=model, optimizer=optimizer)
    train_perf = test_one_epoch(dataloader=source_train_dataloader_for_test,
                                model=model)
    dev_perf = test_one_epoch(dataloader=source_dev_dataloader, model=model)
    test_perf = test_one_epoch(dataloader=source_test_dataloader, model=model)
    base_result_loc = "bert_char_ps/bert_char_result"
    # store performance result
    add_model_result(base_result_loc, epoch, loss, train_perf, dev_perf, test_perf)
```
I can't figure out why the loss keeps decreasing while the performance on the train, dev and test sets stays unchanged. I have been stuck on this for a few days. Does anyone know how to handle this? It would be of great help.
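One sanity check that might help (a hypothetical diagnostic, not part of the script above): dump the raw label predictions each epoch and inspect their distribution, to see whether the model has collapsed to emitting the same labels regardless of input.

```python
from collections import Counter

def label_distribution(predicted_sequences):
    """Return the fraction of each predicted label across all sequences."""
    counts = Counter(label for seq in predicted_sequences for label in seq)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}
```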
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/563/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/562 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/562/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/562/comments | https://api.github.com/repos/huggingface/transformers/issues/562/events | https://github.com/huggingface/transformers/pull/562 | 438,974,141 | MDExOlB1bGxSZXF1ZXN0Mjc0ODg0OTE1 | 562 | Small fix to remove shifting of lm labels during pre process of RocStories. | {
"login": "apappu97",
"id": 12404768,
"node_id": "MDQ6VXNlcjEyNDA0NzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/12404768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apappu97",
"html_url": "https://github.com/apappu97",
"followers_url": "https://api.github.com/users/apappu97/followers",
"following_url": "https://api.github.com/users/apappu97/following{/other_user}",
"gists_url": "https://api.github.com/users/apappu97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apappu97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apappu97/subscriptions",
"organizations_url": "https://api.github.com/users/apappu97/orgs",
"repos_url": "https://api.github.com/users/apappu97/repos",
"events_url": "https://api.github.com/users/apappu97/events{/privacy}",
"received_events_url": "https://api.github.com/users/apappu97/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Awesome, thanks!"
] | 1,556 | 1,556 | 1,556 | CONTRIBUTOR | null | In reference to https://github.com/huggingface/pytorch-pretrained-BERT/issues/473, remove the shifting of lm labels by one position, since this shift happens internally during the model's forward pass.
@thomwolf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/562/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/562",
"html_url": "https://github.com/huggingface/transformers/pull/562",
"diff_url": "https://github.com/huggingface/transformers/pull/562.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/562.patch",
"merged_at": 1556702417000
} |
https://api.github.com/repos/huggingface/transformers/issues/561 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/561/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/561/comments | https://api.github.com/repos/huggingface/transformers/issues/561/events | https://github.com/huggingface/transformers/issues/561 | 438,963,757 | MDU6SXNzdWU0Mzg5NjM3NTc= | 561 | Training Transformer XL from scratch | {
"login": "anshuman1992",
"id": 46162317,
"node_id": "MDQ6VXNlcjQ2MTYyMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/46162317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anshuman1992",
"html_url": "https://github.com/anshuman1992",
"followers_url": "https://api.github.com/users/anshuman1992/followers",
"following_url": "https://api.github.com/users/anshuman1992/following{/other_user}",
"gists_url": "https://api.github.com/users/anshuman1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anshuman1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anshuman1992/subscriptions",
"organizations_url": "https://api.github.com/users/anshuman1992/orgs",
"repos_url": "https://api.github.com/users/anshuman1992/repos",
"events_url": "https://api.github.com/users/anshuman1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/anshuman1992/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This looks good to me",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@anshuman1992 could you share a code snippet/gist used for training TransformerXL model?\r\n",
"@anshuman1992 this will be great for me too"
] | 1,556 | 1,566 | 1,562 | NONE | null | Hello,
I'm trying to train a transformer XL model from scratch by combining the architecture code from this library and training code from the official paper repo. But this yields NaNs during training, so I just wanted to clarify the recommended way to initialize a new model.
Im doing it by,
```
architecture = TransfoXLConfig().from_json_file(args.config_path)
model = TransfoXLLMHeadModel(architecture)
```
Is there a bug in this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/561/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/560 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/560/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/560/comments | https://api.github.com/repos/huggingface/transformers/issues/560/events | https://github.com/huggingface/transformers/pull/560 | 438,672,343 | MDExOlB1bGxSZXF1ZXN0Mjc0NjQ5OTg2 | 560 | Improvements to GPT-2 (special_tokens, fine-tuning, medium model) + repo code coverage metric | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/560?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@b832d5b`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `70.37%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/560?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #560 +/- ##\n=========================================\n Coverage ? 66.04% \n=========================================\n Files ? 18 \n Lines ? 3673 \n Branches ? 0 \n=========================================\n Hits ? 2426 \n Misses ? 1247 \n Partials ? 0\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/560?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/modeling\\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/560/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfb3BlbmFpLnB5) | `79.68% <100%> (ø)` | |\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/560/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `84.86% <100%> (ø)` | |\n| [pytorch\\_pretrained\\_bert/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/560/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `80.16% <61.9%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/560?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/560?src=pr&el=footer). Last update [b832d5b...db98a4a](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/560?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,556 | 1,566 | 1,557 | MEMBER | null | - adding method to add special tokens to GPT-2 (like it's done for GPT).
- adding code coverage tracking for tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/560/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/560",
"html_url": "https://github.com/huggingface/transformers/pull/560",
"diff_url": "https://github.com/huggingface/transformers/pull/560.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/560.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/559 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/559/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/559/comments | https://api.github.com/repos/huggingface/transformers/issues/559/events | https://github.com/huggingface/transformers/issues/559 | 438,604,567 | MDU6SXNzdWU0Mzg2MDQ1Njc= | 559 | the size of words and the size of lables do not match | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can you give the exact log of (and before) the error message?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | When I run the bert-large-cased model, it prints "the size of words and the size of lables do not match" but I get no error message. What is this issue? Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/559/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/558 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/558/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/558/comments | https://api.github.com/repos/huggingface/transformers/issues/558/events | https://github.com/huggingface/transformers/issues/558 | 438,511,599 | MDU6SXNzdWU0Mzg1MTE1OTk= | 558 | can one run squad using gpt2? | {
"login": "David-Levinthal",
"id": 8728143,
"node_id": "MDQ6VXNlcjg3MjgxNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8728143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/David-Levinthal",
"html_url": "https://github.com/David-Levinthal",
"followers_url": "https://api.github.com/users/David-Levinthal/followers",
"following_url": "https://api.github.com/users/David-Levinthal/following{/other_user}",
"gists_url": "https://api.github.com/users/David-Levinthal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/David-Levinthal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/David-Levinthal/subscriptions",
"organizations_url": "https://api.github.com/users/David-Levinthal/orgs",
"repos_url": "https://api.github.com/users/David-Levinthal/repos",
"events_url": "https://api.github.com/users/David-Levinthal/events{/privacy}",
"received_events_url": "https://api.github.com/users/David-Levinthal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"2 years late, but can anyone figure it out?"
] | 1,556 | 1,637 | 1,562 | NONE | null | Looking through the new notes discussing GPT-2, I do not understand how one might run SQuAD fine-tuning on a pretrained GPT-2 model.
Any assistance would be greatly appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/558/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/557 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/557/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/557/comments | https://api.github.com/repos/huggingface/transformers/issues/557/events | https://github.com/huggingface/transformers/issues/557 | 438,472,035 | MDU6SXNzdWU0Mzg0NzIwMzU= | 557 | Expanding vocab size for GPT2 pre-trained model. | {
"login": "adigoryl",
"id": 31667817,
"node_id": "MDQ6VXNlcjMxNjY3ODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/31667817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adigoryl",
"html_url": "https://github.com/adigoryl",
"followers_url": "https://api.github.com/users/adigoryl/followers",
"following_url": "https://api.github.com/users/adigoryl/following{/other_user}",
"gists_url": "https://api.github.com/users/adigoryl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adigoryl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adigoryl/subscriptions",
"organizations_url": "https://api.github.com/users/adigoryl/orgs",
"repos_url": "https://api.github.com/users/adigoryl/repos",
"events_url": "https://api.github.com/users/adigoryl/events{/privacy}",
"received_events_url": "https://api.github.com/users/adigoryl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@thomwolf Could you or someone from your team point me in the right direction to get the gtp2 model running with a small number of newly defined special tokens?\r\nAny help very appreciated as I really need to move on with my research project.",
"Hi @adigoryl, I'm adding this feature with PR #560\r\n\r\nYou can have a look.\r\n\r\nIt should be merged soon I guess.",
"Hi @thomwolf, first of all, I would like to thank you for the quick response and solution. I have had a look at the added lines and have replaced the 'modelling_gpt2.py' file in my pytorch_pretrained_bert lib. Running the code: `model = GPT2LMHeadModel.from_pretrained(args.model_name, num_special_tokens=len(special_tokens))` gives me:\r\n\r\n> model = cls(config, *inputs, **kwargs)\r\nTypeError: __init__() got an unexpected keyword argument 'num_special_tokens'\r\n\r\nI am not sure whether this happens because of the way I have updated my lib or there still is something missing.\r\nWhat is the best way to update my lib with the freshly made changes?\r\n\r\n--------------------------UPDATE-------------------------------\r\nCopy and paste seemed to work. The problem was that I needed to add a new line after the updated code paste since python is space sensitive. Having to fix it the num_special_tokens works in an anticipated way as I can see in the debugger that it sets the n_special field and updates total_tokens_embeddings. However, having this all fixed I still end up with the same issue I started with:\r\n\r\n> Traceback (most recent call last):\r\n File \"/Users/aw678/PycharmProjects/BERT/gtp2_train_lyrics_LM_copy.py\", line 202, in <module>\r\n main()\r\n File \"/Users/aw678/PycharmProjects/BERT/gtp2_train_lyrics_LM_copy.py\", line 178, in main\r\n losses, past = model(input_ids, lm_labels, past=past)\r\n File \"/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 489, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling_gpt2.py\", line 661, in forward\r\n hidden_states, presents = self.transformer(input_ids, position_ids, token_type_ids, past)\r\n File \"/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 489, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling_gpt2.py\", line 587, in forward\r\n position_embeds = self.wpe(position_ids)\r\n File \"/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 489, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/modules/sparse.py\", line 118, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/Users/aw678/anaconda3/envs/pytorch_conda_env/lib/python3.6/site-packages/torch/nn/functional.py\", line 1454, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: index out of range at /Users/soumith/mc3build/conda-bld/pytorch_1549593514549/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:191\r\n\r\nNot sure why it complains about \"position_ids\" since they are not compulsory. I believe this may not be an issue with my code (just in case someone wants to have a look):\r\n[gtp2_train_lyrics_LM_copy.pdf](https://github.com/huggingface/pytorch-pretrained-BERT/files/3131933/gtp2_train_lyrics_LM_copy.pdf)\r\n\r\nIf you could provide a simplified working example of running GPT2 with new tokens then this should resolve my issue.\r\n",
"To replicate the error use the simplified version:\r\n[gpt2_simplified.py.zip](https://github.com/huggingface/pytorch-pretrained-BERT/files/3132183/gpt2_simplified.py.zip)\r\n",
"You have to install the repo from source from the PR branch (see the instructions in the readme to install from source and after cloning git checkout to the PR branch before installing).\r\n\r\nIf it looks too complicated maybe the best is to wait for the PR to be merged.",
"I have managed to update the library on my machine but it seems that there is an incompatibility in the lib code. If you could provide a working toy example on how to fine-tune GPT2 with special symbols then I am sure the community would appreciate it and my issue would be resolved. I have attached such toy example above in the zip file, however, it has an issue which I believe is caused by the lib.\r\n\r\nI am sorry to bother you so much but I just want to get on with my work. \r\nRegards, Adrian.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | About the aim:
I am trying to fine-tune a model on an English lyrics dataset in order to capture a style of a specific genre. To do this, at the fine-tuning input step, I wrap the lyrics with a "special token", e.g. <genre_type_tag> Lyrics text <genre_type_tag>. This means that I have to expand the vocab size by the number of special tokens.
Issue:
Using the GPT2 tokenizer, I find that I can easily expand the vocab by specifying the special tokens:
`tokenizer = GPT2Tokenizer.from_pretrained(args.model_name, special_tokens=special_tokens)`.
However, the problem arises when I try to run the input through the model and get the following error:
> return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range at /Users/soumith/mc3build/conda-bld/pytorch_1549593514549/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:191
Which I believe says that the vocab id of the special token I am using is out of bounds, since the model has been pre-trained without them.
On the other hand, using the OpenAIGPT model, I can see that this problem is solved by an additional parameter at initialisation which tells the model to expect a number of special tokens:
`model = OpenAIGPTDoubleHeadsModel.from_pretrained(args.model_name, num_special_tokens=len(special_tokens))`
I was wondering whether and how I can achieve a similar effect using GPT2, since it doesn't have such a parameter option.
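One manual workaround I am considering (a rough, untested sketch; it assumes the token embeddings live at `model.transformer.wte` and that the LM head exposes `set_embeddings_weights` as in this library's `GPT2LMHead`):

```python
import torch

def expand_gpt2_vocab(model, num_new_tokens):
    # Grow the input embedding matrix: keep the pre-trained rows and
    # randomly initialize the rows for the new special tokens.
    old_wte = model.transformer.wte
    new_wte = torch.nn.Embedding(old_wte.num_embeddings + num_new_tokens,
                                 old_wte.embedding_dim)
    new_wte.weight.data.normal_(mean=0.0, std=0.02)
    new_wte.weight.data[:old_wte.num_embeddings] = old_wte.weight.data
    model.transformer.wte = new_wte
    # Re-tie the weight-shared LM head to the enlarged embedding matrix.
    model.lm_head.set_embeddings_weights(new_wte.weight)
    return model
```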
To work around this issue I tried to alter the config file created using:
`config = GPT2Config.from_json_file(output_config_file)`, however, this gave me more issues and I am not sure whether that is the correct way to do it.
Kind regards.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/557/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/556 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/556/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/556/comments | https://api.github.com/repos/huggingface/transformers/issues/556/events | https://github.com/huggingface/transformers/issues/556 | 438,434,308 | MDU6SXNzdWU0Mzg0MzQzMDg= | 556 | Training beyond specified 't_total' steps with schedule 'warmup_linear'. Learning rate set to 0.0. Please set 't_total' of BertAdam correctly. | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Actually, shouldn't the `int()` be a `ceiling()`? Because let's say `args.gradient_accumulation_steps` is 1, then it is `ceiling(len(train_examples) / args.train_batch_size)` that is the number of batches in an epoch.",
"I am having the same problem with my finetuned model for gpt2",
"I am having the same issue in partially changed run_squad.py code.",
"I have the same issue.",
"Humm yes, we should probably change `int` to `ceiling` in this example indeed.",
"Is this a significant issue? If it's only the last few batches in the last epoch that are not being trained on, it shouldn't be a huge problem, right?\r\n\r\nAlso I find it strange that suddenly a lot of people are running into this bug in this past week (according to the replies to this issue) even though the `int` code was written 3 months ago. Is this also related to some other more recent changes?",
"Yes there was a huge refactoring of the `BertAdam` optimizer by @lukovnikov (#389, #445, #531)",
"Hi, this warning is printed to avoid wasted computations with warmup-linear or other surprises with other schedules due to a t_total set too low.\r\nAnd I think that line should be `int( math.ceil(len(train_examples) / args.train_batch_size) / args.gradient_accumulation_steps) * args.num_train_epochs` (@thomwolf) ?",
"Hi. I figured out the source of the problem: t_total, aka num_train_optimization_steps \r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_squad.py#L903\r\n\r\n is computed over the length of the train examples, while the true number of steps is determined by whatever convert_examples_to_features returns\r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/3fc63f126ddf883ba9659f13ec046c3639db7b7e/examples/run_squad.py#L970\r\n\r\nA print(len(train_examples), len(train_features)) in line 980 returns:\r\n\r\n87599 191597 \r\n\r\n",
"You could also add the option `drop_last=True` to the `DataLoader`, then the number of samples will be calculated correctly.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I get error below while running the program.. Did I do any mistake?\r\n\r\nwarmup_linear = WarmupLinearSchedule( warmup=args.warmup_proportion,\r\nt_total=num_train_optimization_steps)\r\n\r\nlr_this_step = args.learning_rate * warmup_linear.get_lr(num_train_optimization_steps,\r\nargs.warmup_proportion)\r\n\r\nWARNING - pytorch_pretrained_bert.optimization - Training beyond specified 't_total'. Learning rate multiplier set to 0.0. Please set 't_total' of WarmupLinearSchedule correctly."
] | 1,556 | 1,568 | 1,564 | CONTRIBUTOR | null | I am seeing the above error in my training process. Is it a significant issue? Looks like it's related to `t_total`, which should be properly set here:
https://github.com/huggingface/pytorch-pretrained-BERT/blob/b832d5bb8a6dfc5965015b828e577677eace601e/examples/run_classifier.py#L742-L743
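For reference, one correction suggested in the comment thread is to round the per-epoch batch count up instead of truncating it (a sketch using the variables from run_classifier.py):

```python
import math

# ceil avoids undercounting the final partial batch of each epoch, which is
# what makes t_total smaller than the true number of optimizer steps.
num_train_optimization_steps = int(
    math.ceil(len(train_examples) / args.train_batch_size)
    / args.gradient_accumulation_steps
) * args.num_train_epochs
```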
What could be potential causes of this issue? I trained exactly `args.num_train_epochs` epochs, and didn't alter the training data in between, so shouldn't this pre-calculated `t_total` work without issue?
My `len(train_examples)` is 49401, `args.num_train_epochs` is 5, using 2 GPUs, and other parameters are left as default. If it matters, my code is based on a version (68a889) before the recent `WarmupLinearSchedule` change. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/556/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/556/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/555 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/555/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/555/comments | https://api.github.com/repos/huggingface/transformers/issues/555/events | https://github.com/huggingface/transformers/issues/555 | 438,298,098 | MDU6SXNzdWU0MzgyOTgwOTg= | 555 | Transformer XL from Pytorch model | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | CONTRIBUTOR | null | Hello,
I have trained the original pytorch version of transformer xl, and I want to load it to get the hidden states and predictions.
However, it doesn't work: apparently you only support loading a model from TensorFlow checkpoints.
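The closest thing I could piece together is the following untested sketch (the original repo saves the whole module with `torch.save`, so the state dict keys may not line up with this library's model and could need renaming):

```python
import torch
from pytorch_pretrained_bert import TransfoXLConfig, TransfoXLLMHeadModel

saved = torch.load("model.pt", map_location="cpu")
# torch.save(model) stores the full module; fall back to a plain state dict.
state_dict = saved.state_dict() if hasattr(saved, "state_dict") else saved
config = TransfoXLConfig()  # must match the architecture that was trained
model = TransfoXLLMHeadModel(config)
model.load_state_dict(state_dict, strict=False)  # tolerate key mismatches
```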
Is there any hint or feature modification to make it work with model.pt and cache.pt? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/555/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/554 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/554/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/554/comments | https://api.github.com/repos/huggingface/transformers/issues/554/events | https://github.com/huggingface/transformers/issues/554 | 438,102,123 | MDU6SXNzdWU0MzgxMDIxMjM= | 554 | ValueError: For training, each question should have exactly 1 answer. | {
"login": "RAXAI",
"id": 32540275,
"node_id": "MDQ6VXNlcjMyNTQwMjc1",
"avatar_url": "https://avatars.githubusercontent.com/u/32540275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RAXAI",
"html_url": "https://github.com/RAXAI",
"followers_url": "https://api.github.com/users/RAXAI/followers",
"following_url": "https://api.github.com/users/RAXAI/following{/other_user}",
"gists_url": "https://api.github.com/users/RAXAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RAXAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RAXAI/subscriptions",
"organizations_url": "https://api.github.com/users/RAXAI/orgs",
"repos_url": "https://api.github.com/users/RAXAI/repos",
"events_url": "https://api.github.com/users/RAXAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/RAXAI/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Please give more information: the command used (arguments passed), traceback (the command line output), and version (can use `pip show pytorch_pretrained_bert`)\r\n\r\nI faced a similar problem with `read_squad_examples` when passing `input_file=dev.json` and `is_training=True`. ",
"I have this problem when training on SQUADv2 without `--version_2_with_negative` option. Basically for squad 2.0, it is possible there is no answer for questions. Adding this option in training command fixed the problem for me.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi,\r\n\r\nI see the same error. Some of the questions have multiple answer spans. \r\n\r\nThe error suggests that the current preprocessing codes cannot handle multiple answer spans for a given question.\r\n\r\n Has anyone fixed this ??\r\n\r\nThanks",
"I also receive this error when using the `--version_2_with_negative` flag paired with training data from SQuAD 2.0. It looks like it may be caused by some logic in `utils_squad.py`, lines 150-152; it seems that answerable questions are expected to only have one answer. I'm not familiar enough with the task and data set to know if this is a correct assumption, but since I'm using data from the SQuAD 2.0 web site I would think it should train fine.",
"For training, the assumption is true.\n\nOn Mon, Nov 18, 2019 at 09:56 Allen Kim <[email protected]> wrote:\n\n> I also receive this error when using the --version_2_with_negative flag\n> paired with training data from SQuAD 2.0. It looks like it may be caused by\n> some logic in utils_squad.py, lines 150-152; it seems that answerable\n> questions are expected to only have one answer. I'm not familiar enough\n> with the task and data set to know if this is a correct assumption, but\n> since I'm using data from the SQuAD 2.0 web site I would think it should\n> train fine.\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/554?email_source=notifications&email_token=AIEAE4GLKEIMFR3BUJQK5VTQUHY4BA5CNFSM4HI7NMRKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEEI5VNI#issuecomment-554818229>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AIEAE4H7OARRY4INFJTGU6DQUHY4BANCNFSM4HI7NMRA>\n> .\n>\n",
"Thanks for clarifying!"
] | 1,556 | 1,574 | 1,563 | NONE | null | Tried to run run_squad.py with the SQuAD 2.0 dataset and got this error: `ValueError: For training, each question should have exactly 1 answer.` How do I solve this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/554/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/553 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/553/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/553/comments | https://api.github.com/repos/huggingface/transformers/issues/553/events | https://github.com/huggingface/transformers/issues/553 | 438,094,501 | MDU6SXNzdWU0MzgwOTQ1MDE= | 553 | How to get back input and predictions as string | {
"login": "bikashg",
"id": 17159812,
"node_id": "MDQ6VXNlcjE3MTU5ODEy",
"avatar_url": "https://avatars.githubusercontent.com/u/17159812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bikashg",
"html_url": "https://github.com/bikashg",
"followers_url": "https://api.github.com/users/bikashg/followers",
"following_url": "https://api.github.com/users/bikashg/following{/other_user}",
"gists_url": "https://api.github.com/users/bikashg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bikashg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bikashg/subscriptions",
"organizations_url": "https://api.github.com/users/bikashg/orgs",
"repos_url": "https://api.github.com/users/bikashg/repos",
"events_url": "https://api.github.com/users/bikashg/events{/privacy}",
"received_events_url": "https://api.github.com/users/bikashg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | Once I am done fine-tuning my `BertForSequenceClassification` model, I evaluate it on a validation set. I can see the loss and accuracy scores, but I would also like to get the actual labels (as strings) it predicted for each sentence (string) in the validation dataset. How could I do that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/553/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/552 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/552/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/552/comments | https://api.github.com/repos/huggingface/transformers/issues/552/events | https://github.com/huggingface/transformers/issues/552 | 438,035,356 | MDU6SXNzdWU0MzgwMzUzNTY= | 552 | should loss_scale be multiplied to the loss explicitly? | {
"login": "Jim-Song",
"id": 32925029,
"node_id": "MDQ6VXNlcjMyOTI1MDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/32925029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jim-Song",
"html_url": "https://github.com/Jim-Song",
"followers_url": "https://api.github.com/users/Jim-Song/followers",
"following_url": "https://api.github.com/users/Jim-Song/following{/other_user}",
"gists_url": "https://api.github.com/users/Jim-Song/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jim-Song/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jim-Song/subscriptions",
"organizations_url": "https://api.github.com/users/Jim-Song/orgs",
"repos_url": "https://api.github.com/users/Jim-Song/repos",
"events_url": "https://api.github.com/users/Jim-Song/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jim-Song/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | I noticed that in run_swag.py, the following code is included:
```
if args.fp16 and args.loss_scale != 1.0:
    # rescale loss for fp16 training
    # see https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html
    loss = loss * args.loss_scale
```
and in run_squad.py, this is not included.
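For context, the generic static loss-scaling pattern looks roughly like this (an illustrative sketch, not the exact code of either script):

```python
# Scale the loss before backward so small fp16 gradients don't underflow,
# then unscale the gradients before the optimizer step.
scaled_loss = loss * args.loss_scale
scaled_loss.backward()
for param in model.parameters():
    if param.grad is not None:
        param.grad.data.div_(args.loss_scale)
optimizer.step()
optimizer.zero_grad()
```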
The optimizers in them are identical, so which one is right? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/552/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/551 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/551/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/551/comments | https://api.github.com/repos/huggingface/transformers/issues/551/events | https://github.com/huggingface/transformers/issues/551 | 438,029,799 | MDU6SXNzdWU0MzgwMjk3OTk= | 551 | Pad inputs to multiple of 8 | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | MEMBER | null | Pad transformer's inputs to a multiple of 8 to better use Tensor Cores in fp16 mode.
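A minimal sketch of what this could look like (illustrative only; assumes right-padding with the model's pad id):

```python
import torch

def pad_to_multiple_of_8(input_ids, pad_token_id=0):
    """Right-pad a (batch, seq_len) tensor so that seq_len % 8 == 0."""
    remainder = input_ids.size(1) % 8
    if remainder == 0:
        return input_ids
    padding = input_ids.new_full((input_ids.size(0), 8 - remainder), pad_token_id)
    return torch.cat([input_ids, padding], dim=1)
```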
@glample's [XLM](https://github.com/facebookresearch/XLM) does that and it seems still relevant with CUDA 10 (cc @yaroslavvb). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/551/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/550 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/550/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/550/comments | https://api.github.com/repos/huggingface/transformers/issues/550/events | https://github.com/huggingface/transformers/pull/550 | 438,015,484 | MDExOlB1bGxSZXF1ZXN0Mjc0MTUzMTk5 | 550 | Fix GPT2 crash on special quotes in Python 3 | {
"login": "AdamDanielKing",
"id": 5590173,
"node_id": "MDQ6VXNlcjU1OTAxNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5590173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdamDanielKing",
"html_url": "https://github.com/AdamDanielKing",
"followers_url": "https://api.github.com/users/AdamDanielKing/followers",
"following_url": "https://api.github.com/users/AdamDanielKing/following{/other_user}",
"gists_url": "https://api.github.com/users/AdamDanielKing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdamDanielKing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdamDanielKing/subscriptions",
"organizations_url": "https://api.github.com/users/AdamDanielKing/orgs",
"repos_url": "https://api.github.com/users/AdamDanielKing/repos",
"events_url": "https://api.github.com/users/AdamDanielKing/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdamDanielKing/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks, this is closed now with #564"
] | 1,556 | 1,556 | 1,556 | NONE | null | In Python 3 the line
https://github.com/huggingface/pytorch-pretrained-BERT/blob/b832d5bb8a6dfc5965015b828e577677eace601e/pytorch_pretrained_bert/tokenization_gpt2.py#L224
splits `token` into full characters, not UTF-8 bytes, so for example the right single quote ’ gives `ord('’') == 8217`. That causes a crash since it's a much larger key than any in `self.byte_encoder`. The official GPT2 repo uses `token.encode('utf-8')` but it doesn't work the same in Python 2. I've suggested a fix that uses `token.encode` only in Python 3.
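For reference, the shape of the fix I have in mind (a sketch with a hypothetical helper name; `byte_encoder` is the mapping built in tokenization_gpt2.py):

```python
import sys

def encode_token_bytes(token, byte_encoder):
    """Map a token's UTF-8 bytes through byte_encoder on Python 2 and 3."""
    if sys.version_info[0] == 2:
        # On Python 2, a str is already a byte string.
        return ''.join(byte_encoder[ord(b)] for b in token)
    # On Python 3, iterate over the UTF-8 bytes (ints), not characters.
    return ''.join(byte_encoder[b] for b in token.encode('utf-8'))
```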
Tested on Python 3.7 but not Python 2.
Thanks for this very useful repo! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/550/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/550",
"html_url": "https://github.com/huggingface/transformers/pull/550",
"diff_url": "https://github.com/huggingface/transformers/pull/550.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/550.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/549 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/549/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/549/comments | https://api.github.com/repos/huggingface/transformers/issues/549/events | https://github.com/huggingface/transformers/issues/549 | 438,005,556 | MDU6SXNzdWU0MzgwMDU1NTY= | 549 | CUDA out of memory issue when training | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Try reducing the batch size?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,606 | 1,562 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/549/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/548 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/548/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/548/comments | https://api.github.com/repos/huggingface/transformers/issues/548/events | https://github.com/huggingface/transformers/issues/548 | 437,993,372 | MDU6SXNzdWU0Mzc5OTMzNzI= | 548 | how to ensemble different checkpoints? | {
"login": "shawnkx",
"id": 15963237,
"node_id": "MDQ6VXNlcjE1OTYzMjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/15963237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shawnkx",
"html_url": "https://github.com/shawnkx",
"followers_url": "https://api.github.com/users/shawnkx/followers",
"following_url": "https://api.github.com/users/shawnkx/following{/other_user}",
"gists_url": "https://api.github.com/users/shawnkx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shawnkx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shawnkx/subscriptions",
"organizations_url": "https://api.github.com/users/shawnkx/orgs",
"repos_url": "https://api.github.com/users/shawnkx/repos",
"events_url": "https://api.github.com/users/shawnkx/events{/privacy}",
"received_events_url": "https://api.github.com/users/shawnkx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@shawnkx Hi! Have you found a solution?",
"@all, any updates on this?"
] | 1,556 | 1,640 | 1,562 | NONE | null | I want to ensemble different checkpoints trained from the same parameter configuration but different seeds. Could you tell me how to ensemble these checkpoints? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/548/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/547 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/547/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/547/comments | https://api.github.com/repos/huggingface/transformers/issues/547/events | https://github.com/huggingface/transformers/issues/547 | 437,991,834 | MDU6SXNzdWU0Mzc5OTE4MzQ= | 547 | How to get masked word prediction probabilities | {
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I'm interested in an answer, too. A score/probability would help to select the best word for a masked token.",
"You are looking for the softmax function: https://pytorch.org/docs/stable/nn.html?highlight=softmax#torch.nn.functional.softmax",
"Thanks Thomas, I'll give it a try.",
"Thanks,\r\nSo you say that for score x1 (where all the scores are x1,x2,.. xn) :\r\nprobability_x1 = (exp(^x1)/(exp(^x1) + exp(^x2) + .. exp(^xn))\r\n\r\n",
"@Oxi84 could you share how you obtained the masked word probabilities? I have been trying to do that on my custom data. That is, I want to pretrain my own model and then do masked word prediction on new data.",
"@rvoak The quickstart guide [here](https://github.com/huggingface/pytorch-transformers/blob/master/docs/source/quickstart.md#bert-example) shows a nice example of how to do masked word prediction. \r\nReplace \r\n```\r\n# confirm we were able to predict 'henson'\r\npredicted_index = torch.argmax(predictions[0, masked_index]).item()\r\npredicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]\r\n```\r\nwith something like this (e.g. if you want the top k predicted tokens):\r\n\r\n```\r\ntop_k = 10\r\nprobs = torch.nn.functional.softmax(predictions[0, mask_idx], dim=-1)\r\ntop_k_weights, top_k_indices = torch.topk(probs, top_k, sorted=True)\r\n\r\nfor i, pred_idx in enumerate(top_k_indicies):\r\n predicted_token = tokenizer.convert_ids_to_tokens([pred_idx])[0]\r\n token_weight = top_k_weights[i]\r\n```\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> @rvoak The quickstart guide [here](https://github.com/huggingface/pytorch-transformers/blob/master/docs/source/quickstart.md#bert-example) shows a nice example of how to do masked word prediction.\r\n> Replace\r\n> \r\n> ```\r\n> # confirm we were able to predict 'henson'\r\n> predicted_index = torch.argmax(predictions[0, masked_index]).item()\r\n> predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]\r\n> ```\r\n> \r\n> with something like this (e.g. if you want the top k predicted tokens):\r\n> \r\n> ```\r\n> top_k = 10\r\n> probs = torch.nn.functional.softmax(predictions[0, mask_idx], dim=-1)\r\n> top_k_weights, top_k_indices = torch.topk(probs, top_k, sorted=True)\r\n> \r\n> for i, pred_idx in enumerate(top_k_indicies):\r\n> predicted_token = tokenizer.convert_ids_to_tokens([pred_idx])[0]\r\n> token_weight = top_k_weights[i]\r\n> ```\r\n\r\nvery good example thanks! is there a version for RoBERTa and other models?",
"Hi @yuchenlin, you can use the recently added `fill-mask` pipeline to do so:\r\n\r\n```py\r\n>>> from transformers import pipeline\r\n>>> nlp = pipeline(\"fill-mask\", model=\"roberta-base\")\r\n>>> nlp(f\"This is the best thing I've {nlp.tokenizer.mask_token} in my life.\")\r\n[\r\n {'sequence': \"<s> This is the best thing I've done in my life.</s>\", 'score': 0.8024354577064514, 'token': 626}, \r\n {'sequence': \"<s> This is the best thing I've heard in my life.</s>\", 'score': 0.031355079263448715, 'token': 1317}, \r\n {'sequence': \"<s> This is the best thing I've learned in my life.</s>\", 'score': 0.027319395914673805, 'token': 2435}, \r\n {'sequence': \"<s> This is the best thing I've seen in my life.</s>\", 'score': 0.026892054826021194, 'token': 450}, \r\n {'sequence': \"<s> This is the best thing I've experienced in my life.</s>\", 'score': 0.02160099521279335, 'token': 2984}\r\n]\r\n```\r\n\r\nWe're in the process of adding example usage for common tasks (question answering, sequence classification, mask filling etc), you can follow the progress in https://github.com/huggingface/transformers/pull/2850. There already is an example for mask filling.",
"Hey @LysandreJik, does the fill-mask also support whole word mask prediction, or does it only work on subword level?",
"> Hi @yuchenlin, you can use the recently added `fill-mask` pipeline to do so:\r\n> \r\n> ```python\r\n> >>> from transformers import pipeline\r\n> >>> nlp = pipeline(\"fill-mask\", model=\"roberta-base\")\r\n> >>> nlp(f\"This is the best thing I've {nlp.tokenizer.mask_token} in my life.\")\r\n> [\r\n> {'sequence': \"<s> This is the best thing I've done in my life.</s>\", 'score': 0.8024354577064514, 'token': 626}, \r\n> {'sequence': \"<s> This is the best thing I've heard in my life.</s>\", 'score': 0.031355079263448715, 'token': 1317}, \r\n> {'sequence': \"<s> This is the best thing I've learned in my life.</s>\", 'score': 0.027319395914673805, 'token': 2435}, \r\n> {'sequence': \"<s> This is the best thing I've seen in my life.</s>\", 'score': 0.026892054826021194, 'token': 450}, \r\n> {'sequence': \"<s> This is the best thing I've experienced in my life.</s>\", 'score': 0.02160099521279335, 'token': 2984}\r\n> ]\r\n> ```\r\n> \r\n> We're in the process of adding example usage for common tasks (question answering, sequence classification, mask filling etc), you can follow the progress in #2850. There already is an example for mask filling.\r\n\r\nIs it possible to give this an input word for the mask and get probabilities back for that specific word?",
"Also is it possible to request the top N sentences rather than the default returned?\r\n\r\nEdit: Never mind on this specific question! I found out by setting:\r\n\r\n`nlp.topk = 20`\r\n\r\nbefore doing:\r\n\r\n` nlp(f\"This is the best thing I've {nlp.tokenizer.mask_token} in my life.\")`\r\n\r\nIt now returns 20.",
"Sure, you can do that using the recently added `targets` (in `v3.1.0`):\r\n\r\n```py\r\n>>> from transformers import pipeline\r\n>>> nlp = pipeline(\"fill-mask\", model=\"roberta-base\")\r\n>>> nlp(f\"This is the best thing I've {nlp.tokenizer.mask_token} in my life.\", targets=[' experienced'])\r\n[\r\n {\r\n 'sequence': \"<s>This is the best thing I've experienced in my life.</s>\", \r\n 'score': 0.022622672840952873, \r\n 'token': 2984, \r\n 'token_str': 'Ġexperienced'\r\n }\r\n]\r\n```\r\n\r\nPlease note the space before the word, because we're using the [RoBERTa tokenizer](https://huggingface.co/transformers/model_doc/roberta.html#robertatokenizer) which is a Byte-level BPE tokenizer that has a different behaviour according to the spaces before tokens.",
"@LysandreJik So very helpful! Thank you so much!",
"@LysandreJik If a word is at the start of a sentence, should it also have a space in front of it?:\r\n\r\n```\r\nnlp(f\"{nlp.tokenizer.mask_token} talk about the rules of the game first.\", targets=[' We\\'ll'])\r\n```\r\nWhich gives me:\r\n```\r\nThe specified target token ` We'll` does not exist in the model vocabulary. Replacing with `ĠWe`.\r\n[{'sequence': '<s> We talk about the rules of the game first</s>', 'score': 8.493712812196463e-06, 'token': 166, 'token_str': 'ĠWe'}]\r\n```\r\n\r\nOr\r\n\r\n```\r\nnlp(f\"{nlp.tokenizer.mask_token} talk about the rules of the game first.\", targets=['We\\'ll'])\r\n```\r\nWhich gives me:\r\n```\r\nThe specified target token `We'll` does not exist in the model vocabulary. Replacing with `We`.\r\n[{'sequence': '<s>We talk about the rules of the game first</s>', 'score': 0.12082401663064957, 'token': 170, 'token_str': 'We'}]\r\n```",
"How to predict a word that is seperated into several tokens. For example, DOTA2(name for a popular game)?"
] | 1,556 | 1,651 | 1,570 | NONE | null | Original sentence: i love apples. there are a lot of fruits in the world that i like, but apples would be my favorite fruit.
Masked sentence: i love apples . there are a lot of fruits in the world that i [MASK] , but apples would be my favorite fruit .
When I run this through the PyTorch version of BERT, I get the following representations of probabilities:
Best predicted word: ['love'] tensor(12.7276, grad_fn=)
Other words along with their probabilities:
['like'] tensor(10.2872, grad_fn=)
['miss'] tensor(8.8226, grad_fn=)
['know'] tensor(8.5971, grad_fn=)
['am'] tensor(7.9407, grad_fn=)
['hate'] tensor(7.9209, grad_fn=)
['mean'] tensor(7.8873, grad_fn=)
['enjoy'] tensor(7.8813, grad_fn=)
['want'] tensor(7.6885, grad_fn=)
['prefer'] tensor(7.5712, grad_fn=)
I am quite sure that this does not mean that the probability of the word "love" is proportional to 12.7276 and that of the word "like" to 10.2872.
I also know that the sum of func(score) over the whole vocabulary is 1, but I do not know what func is (see the sketch below).
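The comments above point to the softmax; here is a minimal self-contained sketch of it, using the four top scores quoted above as stand-in logits (a real run would apply it to the full-vocabulary logits at the [MASK] position):
```python
import torch

# Stand-in logits: the top scores quoted above.
scores = torch.tensor([12.7276, 10.2872, 8.8226, 8.5971])

# softmax: p_i = exp(x_i) / sum_j exp(x_j)
probs = torch.nn.functional.softmax(scores, dim=-1)
print(probs)        # approximately tensor([0.8902, 0.0776, 0.0179, 0.0143])
print(probs.sum())  # tensor(1.)
```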
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/547/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/546 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/546/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/546/comments | https://api.github.com/repos/huggingface/transformers/issues/546/events | https://github.com/huggingface/transformers/issues/546 | 437,986,848 | MDU6SXNzdWU0Mzc5ODY4NDg= | 546 | Import Error | {
"login": "goyalsaransh97",
"id": 26386379,
"node_id": "MDQ6VXNlcjI2Mzg2Mzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/26386379?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goyalsaransh97",
"html_url": "https://github.com/goyalsaransh97",
"followers_url": "https://api.github.com/users/goyalsaransh97/followers",
"following_url": "https://api.github.com/users/goyalsaransh97/following{/other_user}",
"gists_url": "https://api.github.com/users/goyalsaransh97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goyalsaransh97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goyalsaransh97/subscriptions",
"organizations_url": "https://api.github.com/users/goyalsaransh97/orgs",
"repos_url": "https://api.github.com/users/goyalsaransh97/repos",
"events_url": "https://api.github.com/users/goyalsaransh97/events{/privacy}",
"received_events_url": "https://api.github.com/users/goyalsaransh97/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This should be fixed with the new release (0.6.2).",
"Unfortunately, I still get this error with the new release. Could that be because I had installed the package before some time ago (and removed it afterwards)? \r\n\r\nNever mind, got it running by cleaning up the environments/paths.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,563 | 1,563 | NONE | null | I'm getting the error `ImportError: cannot import name 'WEIGHTS_NAME' from 'pytorch_pretrained_bert.file_utils'` when running run_squad.py. I've already tried building from source, but the problem persists. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/546/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/545 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/545/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/545/comments | https://api.github.com/repos/huggingface/transformers/issues/545/events | https://github.com/huggingface/transformers/pull/545 | 437,968,723 | MDExOlB1bGxSZXF1ZXN0Mjc0MTIyOTkw | 545 | move pytroch_pretrained_bert cache folder under same path as torch | {
"login": "ailzhang",
"id": 5248122,
"node_id": "MDQ6VXNlcjUyNDgxMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5248122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ailzhang",
"html_url": "https://github.com/ailzhang",
"followers_url": "https://api.github.com/users/ailzhang/followers",
"following_url": "https://api.github.com/users/ailzhang/following{/other_user}",
"gists_url": "https://api.github.com/users/ailzhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ailzhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ailzhang/subscriptions",
"organizations_url": "https://api.github.com/users/ailzhang/orgs",
"repos_url": "https://api.github.com/users/ailzhang/repos",
"events_url": "https://api.github.com/users/ailzhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ailzhang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok, looks good, thanks @ailzhang!"
] | 1,556 | 1,557 | 1,557 | NONE | null | This PR does two things:
* Envs available, in order of precedence (a small resolution sketch follows this list):
  `PYTORCH_PRETRAINED_BERT_CACHE` > `TORCH_HOME` > `XDG_CACHE_HOME` > `~/.cache`
* If no env is set, the default path is
  `~/.cache/torch/pytorch_pretrained_bert`, where `pytorch_pretrained_bert` is visible instead of hidden `.pytorch_pretrained_bert`. (Since this is the cache folder, I feel it makes sense to make it visible; please correct me if I'm wrong. :)
* Minor: fix a typo in the `hubconf.py` example.
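For illustration, a minimal plain-Python sketch of the precedence above (my own reading of this description, not the library's exact code):
```python
import os

# Fallbacks, lowest precedence first: ~/.cache -> XDG_CACHE_HOME -> TORCH_HOME.
xdg_cache = os.getenv("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
torch_home = os.getenv("TORCH_HOME", os.path.join(xdg_cache, "torch"))

# PYTORCH_PRETRAINED_BERT_CACHE wins if set; otherwise a visible folder under torch_home.
cache_dir = os.getenv(
    "PYTORCH_PRETRAINED_BERT_CACHE",
    os.path.join(torch_home, "pytorch_pretrained_bert"),
)
print(cache_dir)  # e.g. /home/me/.cache/torch/pytorch_pretrained_bert (hypothetical)
```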
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/545/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/545",
"html_url": "https://github.com/huggingface/transformers/pull/545",
"diff_url": "https://github.com/huggingface/transformers/pull/545.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/545.patch",
"merged_at": 1557327328000
} |
https://api.github.com/repos/huggingface/transformers/issues/544 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/544/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/544/comments | https://api.github.com/repos/huggingface/transformers/issues/544/events | https://github.com/huggingface/transformers/issues/544 | 437,774,086 | MDU6SXNzdWU0Mzc3NzQwODY= | 544 | TypeError: '<' not supported between instances of 'NoneType' and 'int' | {
"login": "quocnle",
"id": 1280494,
"node_id": "MDQ6VXNlcjEyODA0OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1280494?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quocnle",
"html_url": "https://github.com/quocnle",
"followers_url": "https://api.github.com/users/quocnle/followers",
"following_url": "https://api.github.com/users/quocnle/following{/other_user}",
"gists_url": "https://api.github.com/users/quocnle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quocnle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quocnle/subscriptions",
"organizations_url": "https://api.github.com/users/quocnle/orgs",
"repos_url": "https://api.github.com/users/quocnle/repos",
"events_url": "https://api.github.com/users/quocnle/events{/privacy}",
"received_events_url": "https://api.github.com/users/quocnle/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I also get this problem when predicting,Did you solved the problem?",
"Here is the problem during initialization of the optimizer:\r\n` t_total=num_train_optimization_steps)`\r\n\r\nThis var is initialized with `None` for the first time `num_train_optimization_steps = None`\r\nand it's initialized correctly only when `--do_train` flag is passed to the script\r\n```\r\n if args.do_train:\r\n train_examples = processor.get_train_examples(args.data_dir)\r\n num_train_optimization_steps = int(\r\n len(train_examples) / args.train_batch_size / args.gradient_accumulation_steps) * args.num_train_epochs\r\n if args.local_rank != -1:\r\n num_train_optimization_steps = num_train_optimization_steps // torch.distributed.get_world_size()\r\n```\r\nbut in case of `--do_eval` this var is `None` and you got an error from description.\r\n\r\nIt's a bug, I think, and should be fixed. For you local needs, just initialize the optimizer \"somehow\" - it's not used while evaluating.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | Hi, I am trying to do classification fine tuning using bert-base-uncased. I am using examples from master and pytorch_pretrained_bert==0.6.2. Here are my repro steps:
1. I create a train.tsv and dev.tsv file with my own domain data. The files contain sentences and labels separated by a tab. I put these files in /tmp/bertdata
2. I do fine-tuning using: `python run_classifier.py --data_dir /tmp/bertdata --bert_model bert-base-uncased --task_name sst-2 --do_lower_case --do_train --output_dir tmp`. This works fine, and a model, config JSON, and vocab.txt are placed in tmp.
3. I try to use the fine-tuned model on the dev.tsv set: `python run_classifier.py --data_dir /tmp/bertdata --bert_model tmp --task_name sst-2 --do_lower_case --do_eval --output_dir tmp_result`. When I do that, I get this error:
```
Traceback (most recent call last):
  File "run_classifier.py", line 1024, in <module>
    main()
  File "run_classifier.py", line 794, in main
    t_total=num_train_optimization_steps)
  File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/pytorch_pretrained_bert/optimization.py", line 215, in __init__
    schedule = schedule_type(warmup=warmup, t_total=t_total)
  File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/pytorch_pretrained_bert/optimization.py", line 45, in __init__
    if t_total < 0:
TypeError: '<' not supported between instances of 'NoneType' and 'int'
```
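For context, a minimal plain-Python illustration of the failing comparison (my own sketch based on the traceback; per the comments above, `num_train_optimization_steps` is only computed under `--do_train`, so `t_total` arrives as `None` during eval-only runs):
```python
t_total = None  # what the schedule receives when --do_train is absent

try:
    if t_total < 0:  # the comparison from optimization.py line 45
        pass
except TypeError as e:
    print(e)  # '<' not supported between instances of 'NoneType' and 'int'
```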
Anything obvious I am doing wrong? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/544/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/543 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/543/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/543/comments | https://api.github.com/repos/huggingface/transformers/issues/543/events | https://github.com/huggingface/transformers/issues/543 | 437,741,208 | MDU6SXNzdWU0Mzc3NDEyMDg= | 543 | How to train our own domain-specific data instead of using pre-training models? | {
"login": "yiranxijie",
"id": 12460007,
"node_id": "MDQ6VXNlcjEyNDYwMDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/12460007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yiranxijie",
"html_url": "https://github.com/yiranxijie",
"followers_url": "https://api.github.com/users/yiranxijie/followers",
"following_url": "https://api.github.com/users/yiranxijie/following{/other_user}",
"gists_url": "https://api.github.com/users/yiranxijie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yiranxijie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yiranxijie/subscriptions",
"organizations_url": "https://api.github.com/users/yiranxijie/orgs",
"repos_url": "https://api.github.com/users/yiranxijie/repos",
"events_url": "https://api.github.com/users/yiranxijie/events{/privacy}",
"received_events_url": "https://api.github.com/users/yiranxijie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I also have this question whenever someone gets to it, but I think that this isn't doable with this package. There's got to be a way to hack it, but you'd probably have to take away some of the code at the beginning of the pipeline. @yiranxijie ",
"Is there any news on this? Training one of these models from scratch?",
"@mattivi not yet",
"Hi all, so training from scratch will probably never be a goal for the present repo but here are great transformer codebases that were scaled to >64 GPUs:\r\n- XLM: https://github.com/facebookresearch/xlm\r\n- Megatron-LM: https://github.com/NVIDIA/Megatron-LM\r\n- fairseq: https://github.com/pytorch/fairseq\r\n\r\nNote that the typical compute required to train BERT is about 64 GPU for 4 days (which currently means around $10k-15k if you are renting cloud compute). TPU training is not possible in PyTorch currently, you should use a TensorFlow repo to do TPU training (like the original BERT or tensor2tensor for instance).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,563 | 1,563 | NONE | null | How can we train on our own domain-specific data instead of using the pre-trained models? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/543/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/543/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/542 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/542/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/542/comments | https://api.github.com/repos/huggingface/transformers/issues/542/events | https://github.com/huggingface/transformers/issues/542 | 437,702,121 | MDU6SXNzdWU0Mzc3MDIxMjE= | 542 | Clarifying attention mask | {
"login": "hadsed",
"id": 2019168,
"node_id": "MDQ6VXNlcjIwMTkxNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2019168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadsed",
"html_url": "https://github.com/hadsed",
"followers_url": "https://api.github.com/users/hadsed/followers",
"following_url": "https://api.github.com/users/hadsed/following{/other_user}",
"gists_url": "https://api.github.com/users/hadsed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadsed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadsed/subscriptions",
"organizations_url": "https://api.github.com/users/hadsed/orgs",
"repos_url": "https://api.github.com/users/hadsed/repos",
"events_url": "https://api.github.com/users/hadsed/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadsed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The reason a classic binary attention mask won't work here is that the Softmax activation includes an exponential, and so an input of 0 can still yield quite a large softmax weight (since e^0 = 1).\r\n\r\nThe mask can't be applied after the softmax, because then the resulting values will not sum to 1. So the best solution is to add (not multiply!) a large negative value to the indices you want to mask. That means they will be 0 or almost 0 after the softmax step (because as you make x more negative, e^x becomes closer and closer to 0).",
"So you're recommending using a large negative value for the inputs you want to mask. It makes sense to me, though it seems the [documentation](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L671) ought to be updated, since it currently reads:\r\n```\r\n`attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices\r\n selected in [0, 1]. It's a mask to be used if the input sequence length is smaller than the max\r\n input sequence length in the current batch. It's the mask that we typically use for attention when\r\n a batch has varying length sentences.\r\n```\r\nAlthough I've been testing with 0 and it seems to produce the same vectors as when I only pass in a tensor of exactly the size I need. I understand this may not always be the case, however.",
"Note this code chunk: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L722-L728",
"Thank you, that clarifies everything.",
"@Rocketknight1 Hi, I would like to check the code chunk, but the url you provided is out dated, could you show the code here again? Thanks.",
"Hi, sorry! The repo code has changed massively since last year, so I don't know if there's a single chunk corresponding to that link anymore. However, if I recall, all it showed was a short code snippet where the attention_mask tensor was converted into the additive pre-softmax mask by first inverting it and then multiplying it by -10,000. Feel free to ask questions and @tag me if you're still uncertain.",
"@Rocketknight1 Thank you for your reply. Yes, I understand how to change attention_mask into a quite small negative value and why. But in modeling_bert.py file, it seems like there is no such a code chunk to convert attention_mask into a proper format. check this out https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L274",
"I found the corresponding source code: https://github.com/huggingface/transformers/issues/542",
"Hi, I got the same problem with you @YuanEric88 and I didn't find the code chunk to convert attention_mask from [0,1] to [-inf, 0]. The attention_mask is applied in [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L312)",
"@xiangrongzeng Just a passerby here - but I believe this is the method where `[0, 1]` attention masks are mapped to the `[-inf, 0]` range: https://github.com/huggingface/transformers/blob/88a951e3cc00f56b94d9b93dbc35a3812cd88747/src/transformers/modeling_utils.py#L221-L281\r\n\r\n...and the specific operation in question:\r\nhttps://github.com/huggingface/transformers/blob/88a951e3cc00f56b94d9b93dbc35a3812cd88747/src/transformers/modeling_utils.py#L274-L281\r\n\r\nThis method lives in the `ModuleUtilsMixin`, which I'm assuming is inherited by downstream models.",
"@kwonkyo Thankyou for your help :)"
] | 1,556 | 1,616 | 1,556 | NONE | null | I don't quite understand the attention mask in the way that it's implemented.
Here is the relevant line: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L312 :
```python
...
attention_scores = attention_scores / math.sqrt(self.attention_head_size)
# Apply the attention mask (precomputed for all layers in BertModel forward() function)
attention_scores = attention_scores + attention_mask
# Normalize the attention scores to probabilities.
attention_probs = nn.Softmax(dim=-1)(attention_scores)
...
```
So it seems the proper way to use `attention_mask` is to set the positions you want to keep to 1 and the positions you want to mask out to 0 (a small standalone demo follows).
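For concreteness, a tiny runnable demo of the additive mask's effect; this is my own sketch, not repo code, with -10000.0 mirroring the large negative value the comments above discuss:
```python
import torch
import torch.nn.functional as F

scores = torch.tensor([2.0, 1.0, 0.5, -0.3])  # raw attention scores for 4 positions
keep = torch.tensor([1.0, 1.0, 0.0, 0.0])     # 1 = attend, 0 = mask out

additive_mask = (1.0 - keep) * -10000.0       # 0 for kept positions, -10000 for masked
probs = F.softmax(scores + additive_mask, dim=-1)
print(probs)  # masked positions get ~0 weight; kept positions still sum to ~1
```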
Curious why we don't simply multiply instead of adding and then normalizing? Is it for stability reasons? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/542/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/542/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/541 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/541/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/541/comments | https://api.github.com/repos/huggingface/transformers/issues/541/events | https://github.com/huggingface/transformers/issues/541 | 437,555,026 | MDU6SXNzdWU0Mzc1NTUwMjY= | 541 | Any way to reduce the model size to <250mb? | {
"login": "bolaft",
"id": 2310791,
"node_id": "MDQ6VXNlcjIzMTA3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2310791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bolaft",
"html_url": "https://github.com/bolaft",
"followers_url": "https://api.github.com/users/bolaft/followers",
"following_url": "https://api.github.com/users/bolaft/following{/other_user}",
"gists_url": "https://api.github.com/users/bolaft/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bolaft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bolaft/subscriptions",
"organizations_url": "https://api.github.com/users/bolaft/orgs",
"repos_url": "https://api.github.com/users/bolaft/repos",
"events_url": "https://api.github.com/users/bolaft/events{/privacy}",
"received_events_url": "https://api.github.com/users/bolaft/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Probably not - it would certainly be possible to make a smaller BERT model that would fit into this size, but all of the available pre-trained models have too many parameters, so you'd have to train it from scratch (which is very slow, and isn't something this repo supports yet).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | Google Cloud's online prediction service has a 250 MB limit for uploaded models. I don't think I have ever seen a BERT model that small. Casting all tensors to half precision reduces the model to ~350 MB; is there any way to go even further than that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/541/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/540 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/540/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/540/comments | https://api.github.com/repos/huggingface/transformers/issues/540/events | https://github.com/huggingface/transformers/issues/540 | 437,549,824 | MDU6SXNzdWU0Mzc1NDk4MjQ= | 540 | no to_json_file(file) in BERT | {
"login": "seanie12",
"id": 19561061,
"node_id": "MDQ6VXNlcjE5NTYxMDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/19561061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seanie12",
"html_url": "https://github.com/seanie12",
"followers_url": "https://api.github.com/users/seanie12/followers",
"following_url": "https://api.github.com/users/seanie12/following{/other_user}",
"gists_url": "https://api.github.com/users/seanie12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seanie12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seanie12/subscriptions",
"organizations_url": "https://api.github.com/users/seanie12/orgs",
"repos_url": "https://api.github.com/users/seanie12/repos",
"events_url": "https://api.github.com/users/seanie12/events{/privacy}",
"received_events_url": "https://api.github.com/users/seanie12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Are you using the latest release (0.6.2) ?",
"Yes I am.",
"Strange, `to_json_file` should be provided in 0.6.2 (cf code [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/e6cf62d49945e6277b5e4dc855f9186b3f789e35/pytorch_pretrained_bert/modeling.py#L222) and the associated test [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/68a889ee43916380f26a3c995e1638af41d75066/tests/modeling_test.py#L258))",
"Thanks :0) I'll check the version and code again.",
"After removing and install the package, the problem solved "
] | 1,556 | 1,556 | 1,556 | NONE | null | Hi, https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L1035
at line 1035, I cannot use `config.to_json_file(output_config_file)` because there is no such function.
Instead I use:
```python
file = model_to_save.config.to_json_string()
with open(file_path, "w") as f:
    f.write(file)
```
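(For reference, per the maintainer comments above, `to_json_file` is provided as of release 0.6.2, so this one-liner should also work:)
```python
model_to_save.config.to_json_file(output_config_file)
```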
Is this the correct way to save the config file? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/540/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/539 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/539/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/539/comments | https://api.github.com/repos/huggingface/transformers/issues/539/events | https://github.com/huggingface/transformers/issues/539 | 437,532,185 | MDU6SXNzdWU0Mzc1MzIxODU= | 539 | Can we use 'bert-base-uncased' to question_answer just for start, rather rather than run_squad pretraining? | {
"login": "search4mahesh",
"id": 4182331,
"node_id": "MDQ6VXNlcjQxODIzMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4182331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/search4mahesh",
"html_url": "https://github.com/search4mahesh",
"followers_url": "https://api.github.com/users/search4mahesh/followers",
"following_url": "https://api.github.com/users/search4mahesh/following{/other_user}",
"gists_url": "https://api.github.com/users/search4mahesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/search4mahesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/search4mahesh/subscriptions",
"organizations_url": "https://api.github.com/users/search4mahesh/orgs",
"repos_url": "https://api.github.com/users/search4mahesh/repos",
"events_url": "https://api.github.com/users/search4mahesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/search4mahesh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, no you need to fine tune the model on a question answering task like SQuAD before you can use it"
] | 1,556 | 1,556 | 1,556 | NONE | null | Hi,
Can we use 'bert-base-uncased' for question answering directly as a starting point, rather than fine-tuning with run_squad first?
`model = BertForQuestionAnswering.from_pretrained('bert-base-uncased')`
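(For context, per the reply above, the model needs SQuAD fine-tuning first; a sketch of loading a fine-tuned output directory afterwards, assuming run_squad saved one to a hypothetical path:)
```python
model = BertForQuestionAnswering.from_pretrained('/path/to/squad_finetuned_output')
```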
Thanks
Mahesh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/539/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/538 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/538/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/538/comments | https://api.github.com/repos/huggingface/transformers/issues/538/events | https://github.com/huggingface/transformers/issues/538 | 437,526,651 | MDU6SXNzdWU0Mzc1MjY2NTE= | 538 | key error in BertQuestionAsnwering predict? | {
"login": "search4mahesh",
"id": 4182331,
"node_id": "MDQ6VXNlcjQxODIzMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4182331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/search4mahesh",
"html_url": "https://github.com/search4mahesh",
"followers_url": "https://api.github.com/users/search4mahesh/followers",
"following_url": "https://api.github.com/users/search4mahesh/following{/other_user}",
"gists_url": "https://api.github.com/users/search4mahesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/search4mahesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/search4mahesh/subscriptions",
"organizations_url": "https://api.github.com/users/search4mahesh/orgs",
"repos_url": "https://api.github.com/users/search4mahesh/repos",
"events_url": "https://api.github.com/users/search4mahesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/search4mahesh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You found a solution?",
"@thomwolf , I made mistake in code, Repo code works just fine. Hence closed issue.\r\nThanks for this amazing repo :thumbsup:\r\n",
"How to solve this problem?\r\n",
"what was the solution ? Im seeing the same problem ",
"Hello! Do you mind opening a new issue with your problem?",
"Hi, I'm having the same error as described above. Is anyone able to post their solution?"
] | 1,556 | 1,625 | 1,556 | NONE | null | Hi,
I am getting a KeyError while using BertForQuestionAnswering predict.
I am breaking the following loop after 10 iterations:
```python
for input_ids, input_mask, segment_ids, example_indices in tqdm(eval_dataloader, desc="Evaluating", disable=local_rank not in [-1, 0]):
    ...
```
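For context, a hypothetical sketch (ids made up) of what the traceback below suggests: `write_predictions` looks up a result for every feature, so breaking the eval loop early leaves later `unique_id`s without results:
```python
# Hypothetical ids: only results from the first few batches were collected.
unique_id_to_result = {1000000000: "result_0", 1000000001: "result_1"}

feature_unique_id = 1000000088  # a feature from a batch after the early break
result = unique_id_to_result[feature_unique_id]  # raises KeyError: 1000000088
```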
Thanks
Mahesh
Error:
```
KeyError Traceback (most recent call last)
<ipython-input-87-6ac2c26449fb> in <module>()
     41 do_lower_case, output_prediction_file,
     42 output_nbest_file, output_null_log_odds_file, verbose_logging,
---> 43 version_2_with_negative, null_score_diff_threshold)

/content/run_squad.py in write_predictions(all_examples, all_features, all_results, n_best_size, max_answer_length, do_lower_case, output_prediction_file, output_nbest_file, output_null_log_odds_file, verbose_logging, version_2_with_negative, null_score_diff_threshold)
    473 null_end_logit = 0 # the end logit at the slice with min null score
    474 for (feature_index, feature) in enumerate(features):
--> 475 result = unique_id_to_result[feature.unique_id]
    476 start_indexes = _get_best_indexes(result.start_logits, n_best_size)
    477 end_indexes = _get_best_indexes(result.end_logits, n_best_size)

KeyError: 1000000088
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/538/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/537 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/537/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/537/comments | https://api.github.com/repos/huggingface/transformers/issues/537/events | https://github.com/huggingface/transformers/issues/537 | 437,503,822 | MDU6SXNzdWU0Mzc1MDM4MjI= | 537 | New GPT2 tokenizer no longer encodes Unicode characters properly in Python 3 | {
"login": "alasdairtran",
"id": 10582768,
"node_id": "MDQ6VXNlcjEwNTgyNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/10582768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alasdairtran",
"html_url": "https://github.com/alasdairtran",
"followers_url": "https://api.github.com/users/alasdairtran/followers",
"following_url": "https://api.github.com/users/alasdairtran/following{/other_user}",
"gists_url": "https://api.github.com/users/alasdairtran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alasdairtran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alasdairtran/subscriptions",
"organizations_url": "https://api.github.com/users/alasdairtran/orgs",
"repos_url": "https://api.github.com/users/alasdairtran/repos",
"events_url": "https://api.github.com/users/alasdairtran/events{/privacy}",
"received_events_url": "https://api.github.com/users/alasdairtran/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Just ran into this problem. This seems to be a regression from an earlier version of Huggingface.\r\n\r\nFor instance it fails when encoding the following wikipedia snippet\r\n> The dismemberment of the French socialist movement into many groups and—following the suppression\r\n\r\nThe dash here is \"long dash\" with unicode 8212. This worked in earlier version because it worked on bytes.",
"<img width=\"992\" alt=\"image\" src=\"https://user-images.githubusercontent.com/44499264/59059983-a5579180-88d2-11e9-9124-f7ce32f20419.png\">\r\nI can confirm that this is happening, though it is a different dash.",
"> <img alt=\"image\" width=\"992\" src=\"https://user-images.githubusercontent.com/44499264/59059983-a5579180-88d2-11e9-9124-f7ce32f20419.png\">\r\n> \r\n> I can confirm that this is happening, though it is a different dash.\r\n\r\nSame here:\r\nThis is also happening while using GPT2 tokenizer:\r\n\r\n`\r\nTraceback (most recent call last):\r\n File \"run_lambada_gpt2.py\", line 139, in tokenize_and_encode\r\n token_ids = tokenizer.encode(obj)\r\n File \"/data/anaconda/envs/py35/lib/python3.5/site-packages/pytorch_pretrained_bert/tokenization_gpt2.py\", line 261, in encode\r\n return self.convert_tokens_to_ids(self.tokenize(text))\r\n File \"/data/anaconda/envs/py35/lib/python3.5/site-packages/pytorch_pretrained_bert/tokenization_gpt2.py\", line 224, in tokenize\r\n token = ''.join(self.byte_encoder[ord(b)] for b in token)\r\n File \"/data/anaconda/envs/py35/lib/python3.5/site-packages/pytorch_pretrained_bert/tokenization_gpt2.py\", line 224, in <genexpr>\r\n token = ''.join(self.byte_encoder[ord(b)] for b in token)\r\nKeyError: 8217\r\n`\r\n\r\n\r\nThe sys version info is:\r\n`\r\nsys.version_info(major=3, minor=5, micro=5, releaselevel='final', serial=0)\r\n`\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi,\r\nI'm about to use this tokenizer with python3 on wiki-text.\r\nAfter seeing this issue - I'm not sure if it will work properly.\r\n\r\nCan someone clarify please? \r\nFrom reading along seems like the fix suggested above did not solve the problem, right?\r\n",
"Hi, this looks fixed to me in the current implementation. As long as you're using a recent version of the library you should be fine. I had no problem running a fine-tuning script on wikitext-2 last week.\r\n\r\nIf you run into anything, please let me know and I'll look into it.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,571 | 1,571 | NONE | null | In commit 5afa497cbfc53c679a9b22997b6312fad57ee2f8, you changed `token.encode('utf-8')` to simply `token`.
This would make the code compatible with Python 2, but now it breaks in Python 3. You'll get a KeyError when you try to encode a Unicode character that requires more than 1 byte in UTF-8 encoding. For example, this raises a KeyError in Python 3:
```python
from pytorch_pretrained_bert.tokenization_gpt2 import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.encode('你')
```
I think what you want to do is:
```python
if sys.version_info[0] == 2:
    token = ''.join(self.byte_encoder[ord(b)] for b in token)
else:
    token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/537/reactions",
"total_count": 18,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 5,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/537/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/536/comments | https://api.github.com/repos/huggingface/transformers/issues/536/events | https://github.com/huggingface/transformers/pull/536 | 437,348,950 | MDExOlB1bGxSZXF1ZXN0MjczNjQzNjM2 | 536 | Fix missing warmup_linear in run_classifier.py example | {
"login": "abhishekraok",
"id": 783844,
"node_id": "MDQ6VXNlcjc4Mzg0NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/783844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekraok",
"html_url": "https://github.com/abhishekraok",
"followers_url": "https://api.github.com/users/abhishekraok/followers",
"following_url": "https://api.github.com/users/abhishekraok/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekraok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekraok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekraok/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekraok/orgs",
"repos_url": "https://api.github.com/users/abhishekraok/repos",
"events_url": "https://api.github.com/users/abhishekraok/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekraok/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I see there is already a PR to fix this, I will close this."
] | 1,556 | 1,556 | 1,556 | NONE | null | Replaced the `warmup_linear` function call with `WarmupLinearSchedule`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/536/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/536",
"html_url": "https://github.com/huggingface/transformers/pull/536",
"diff_url": "https://github.com/huggingface/transformers/pull/536.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/536.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/535/comments | https://api.github.com/repos/huggingface/transformers/issues/535/events | https://github.com/huggingface/transformers/issues/535 | 437,334,088 | MDU6SXNzdWU0MzczMzQwODg= | 535 | gpt2 fine tuning sources | {
"login": "radiodee1",
"id": 8641916,
"node_id": "MDQ6VXNlcjg2NDE5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8641916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/radiodee1",
"html_url": "https://github.com/radiodee1",
"followers_url": "https://api.github.com/users/radiodee1/followers",
"following_url": "https://api.github.com/users/radiodee1/following{/other_user}",
"gists_url": "https://api.github.com/users/radiodee1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/radiodee1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/radiodee1/subscriptions",
"organizations_url": "https://api.github.com/users/radiodee1/orgs",
"repos_url": "https://api.github.com/users/radiodee1/repos",
"events_url": "https://api.github.com/users/radiodee1/events{/privacy}",
"received_events_url": "https://api.github.com/users/radiodee1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I encountered the same issue",
"Also looking for how to finetune the GPT2 model, thanks.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,571 | 1,571 | NONE | null | hi. I'm looking to fine-tune the GPT-2 model. I missed the part where that sort of fine-tuning takes place. Can someone point out where that code is (...or maybe where an example might be found elsewhere online)? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/535/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/534/comments | https://api.github.com/repos/huggingface/transformers/issues/534/events | https://github.com/huggingface/transformers/issues/534 | 437,285,235 | MDU6SXNzdWU0MzcyODUyMzU= | 534 | How many datasets does Bert use in pretraining process? | {
"login": "DecstionBack",
"id": 9391083,
"node_id": "MDQ6VXNlcjkzOTEwODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9391083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DecstionBack",
"html_url": "https://github.com/DecstionBack",
"followers_url": "https://api.github.com/users/DecstionBack/followers",
"following_url": "https://api.github.com/users/DecstionBack/following{/other_user}",
"gists_url": "https://api.github.com/users/DecstionBack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DecstionBack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DecstionBack/subscriptions",
"organizations_url": "https://api.github.com/users/DecstionBack/orgs",
"repos_url": "https://api.github.com/users/DecstionBack/repos",
"events_url": "https://api.github.com/users/DecstionBack/events{/privacy}",
"received_events_url": "https://api.github.com/users/DecstionBack/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,562 | 1,562 | NONE | null | Hi all,
I am trying to generate the pretraining corpus for BERT with pregenerate_training_data.py. The BERT paper reports about 6M+ instances (segment A + segment B, less than 512 tokens), but I get 18M instances, which is almost 3 times what BERT uses. Does anyone have any idea about this result, and does anyone know whether I need to preprocess Wikipedia and BookCorpus first and then generate training instances? Thanks very much in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/534/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/533/comments | https://api.github.com/repos/huggingface/transformers/issues/533/events | https://github.com/huggingface/transformers/pull/533 | 437,224,702 | MDExOlB1bGxSZXF1ZXN0MjczNTQ1Mjcx | 533 | Docs for new learning rate code | {
"login": "lukovnikov",
"id": 1732910,
"node_id": "MDQ6VXNlcjE3MzI5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1732910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukovnikov",
"html_url": "https://github.com/lukovnikov",
"followers_url": "https://api.github.com/users/lukovnikov/followers",
"following_url": "https://api.github.com/users/lukovnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/lukovnikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukovnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukovnikov/subscriptions",
"organizations_url": "https://api.github.com/users/lukovnikov/orgs",
"repos_url": "https://api.github.com/users/lukovnikov/repos",
"events_url": "https://api.github.com/users/lukovnikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukovnikov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great thanks!",
"The curves plot in the README are beautiful (and perfect size), awesome!"
] | 1,556 | 1,556 | 1,556 | CONTRIBUTOR | null | - Added documentation for learning rate schedules in main README
- Added some pictures for the README in docs/imgs/ (not sure if it's the best place)
- Updated some docs in code for optimization | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/533/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/533",
"html_url": "https://github.com/huggingface/transformers/pull/533",
"diff_url": "https://github.com/huggingface/transformers/pull/533.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/533.patch",
"merged_at": 1556218956000
} |
https://api.github.com/repos/huggingface/transformers/issues/532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/532/comments | https://api.github.com/repos/huggingface/transformers/issues/532/events | https://github.com/huggingface/transformers/issues/532 | 437,219,614 | MDU6SXNzdWU0MzcyMTk2MTQ= | 532 | [Feature request] Support configurable BertLayerNorm epsilon | {
"login": "huntzhan",
"id": 5213906,
"node_id": "MDQ6VXNlcjUyMTM5MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5213906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huntzhan",
"html_url": "https://github.com/huntzhan",
"followers_url": "https://api.github.com/users/huntzhan/followers",
"following_url": "https://api.github.com/users/huntzhan/following{/other_user}",
"gists_url": "https://api.github.com/users/huntzhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huntzhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huntzhan/subscriptions",
"organizations_url": "https://api.github.com/users/huntzhan/orgs",
"repos_url": "https://api.github.com/users/huntzhan/repos",
"events_url": "https://api.github.com/users/huntzhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/huntzhan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I'm closing this in favor of #514 to gather all the discussion on ERNIE."
] | 1,556 | 1,556 | 1,556 | CONTRIBUTOR | null | It would be great if we could configure `eps` in layer normalization, since models like ERNIE use `eps=1e-5` instead of `1e-12`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/532/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/531/comments | https://api.github.com/repos/huggingface/transformers/issues/531/events | https://github.com/huggingface/transformers/pull/531 | 437,178,066 | MDExOlB1bGxSZXF1ZXN0MjczNTA3OTY3 | 531 | fixed new LR API in examples | {
"login": "lukovnikov",
"id": 1732910,
"node_id": "MDQ6VXNlcjE3MzI5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1732910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukovnikov",
"html_url": "https://github.com/lukovnikov",
"followers_url": "https://api.github.com/users/lukovnikov/followers",
"following_url": "https://api.github.com/users/lukovnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/lukovnikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukovnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukovnikov/subscriptions",
"organizations_url": "https://api.github.com/users/lukovnikov/orgs",
"repos_url": "https://api.github.com/users/lukovnikov/repos",
"events_url": "https://api.github.com/users/lukovnikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukovnikov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,556 | 1,556 | 1,556 | CONTRIBUTOR | null | .get_lr() of \_LRSchedule objects expects a step while .get_lr_() expects training progress fraction | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/531/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/531",
"html_url": "https://github.com/huggingface/transformers/pull/531",
"diff_url": "https://github.com/huggingface/transformers/pull/531.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/531.patch",
"merged_at": 1556218879000
} |
https://api.github.com/repos/huggingface/transformers/issues/530 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/530/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/530/comments | https://api.github.com/repos/huggingface/transformers/issues/530/events | https://github.com/huggingface/transformers/issues/530 | 436,962,766 | MDU6SXNzdWU0MzY5NjI3NjY= | 530 | GPT2 training and generating on text longer than 1024 | {
"login": "apappu97",
"id": 12404768,
"node_id": "MDQ6VXNlcjEyNDA0NzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/12404768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apappu97",
"html_url": "https://github.com/apappu97",
"followers_url": "https://api.github.com/users/apappu97/followers",
"following_url": "https://api.github.com/users/apappu97/following{/other_user}",
"gists_url": "https://api.github.com/users/apappu97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apappu97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apappu97/subscriptions",
"organizations_url": "https://api.github.com/users/apappu97/orgs",
"repos_url": "https://api.github.com/users/apappu97/repos",
"events_url": "https://api.github.com/users/apappu97/events{/privacy}",
"received_events_url": "https://api.github.com/users/apappu97/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The default text generation example in the codebase will generate unlimited length.\r\n\r\nHowever, each prediction is only influenced by current context (1024 tokens long). Something like [transformer-xl](https://github.com/kimiyoung/transformer-xl/tree/master/pytorch) is needed to depend on things outside of current context",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@apappu97 Do you know how to input a sequence longer than 1024 using the pretrained models now? Thank you.",
"I get an error when I try to generate with, for example, `--length 10000`.\r\n\r\n````\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\n../aten/src/ATen/native/cuda/Indexing.cu:922: indexSelectSmallIndex: block: [3,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:922: indexSelectSmallIndex: block: [3,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n(many similar lines with ascending indices)\r\n\r\nTraceback (most recent call last):\r\n File \"transformers/examples/pytorch/text-generation/run_generation.py\", line 294, in <module>\r\n main()\r\n File \"transformers/examples/pytorch/text-generation/run_generation.py\", line 252, in main\r\n output_sequences = model.generate(\r\n File \"venv/lib/python3.10/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"transformers/src/transformers/generation_utils.py\", line 1380, in generate\r\n return self.sample(\r\n File \"transformers/src/transformers/generation_utils.py\", line 1996, in sample\r\n outputs = self(\r\n File \"venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"transformers/src/transformers/models/gpt2/modeling_gpt2.py\", line 1046, in forward\r\n transformer_outputs = self.transformer(\r\n File \"venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"transformers/src/transformers/models/gpt2/modeling_gpt2.py\", line 889, in forward\r\n outputs = block(\r\n File \"venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"transformers/src/transformers/models/gpt2/modeling_gpt2.py\", line 389, in forward\r\n attn_outputs = self.attn(\r\n File \"venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1130, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"transformers/src/transformers/models/gpt2/modeling_gpt2.py\", line 330, in forward\r\n attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)\r\n File \"transformers/src/transformers/models/gpt2/modeling_gpt2.py\", line 185, in _attn\r\n attn_weights = attn_weights / torch.tensor(\r\nRuntimeError: CUDA error: device-side assert triggered\r\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\n````\r\n\r\n`1000` seems to be a safe length, but even `1023` can result in errors.\r\n\r\nFull command line:\r\n\r\n python transformers/examples/pytorch/text-generation/run_generation.py --model_type gpt2 --length 10000 --num_return_sequences 10 --model_name_or_path tuned_model/checkpoint-100000\r\n\r\n",
"With a recent git checkout I do not get the error, but the generation script gets a hardcoded limit for text generation from the model class.\r\n\r\nhttps://github.com/huggingface/transformers/blob/4eb918e656944df2757513c535e8ad8c01d632e2/examples/pytorch/text-generation/run_generation.py#L222\r\n\r\nThe input seems to be also quite limited (no idea how many tokens, but probably something around 20-30), so running generation with the last 1024 tokens won't work."
] | 1,556 | 1,667 | 1,561 | CONTRIBUTOR | null | Hello,
First, thanks so much for all of the open source work here! This has been super useful to build off of.
I noticed that the size of the pretrained positional embedding table for GPT-2 is 1024, and was wondering if there are standard methods or suggestions for (a) running the language model head over text longer than 1024 tokens (post-BPE encoding) and (b) generating text longer than 1024 BPE tokens (e.g., via a sliding window like the sketch below).
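For concreteness, here is the kind of sliding-window loop I have in mind (a rough sketch only: greedy decoding for brevity, and it assumes the GPT2Tokenizer/GPT2LMHeadModel API from this repo):
```python
import torch
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2LMHeadModel

MAX_CTX = 1024  # size of GPT-2's positional embedding table

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

generated = tokenizer.encode("Some prompt text")
with torch.no_grad():
    for _ in range(2000):  # more tokens than the context window
        context = torch.tensor([generated[-MAX_CTX:]])   # keep only the last 1024 tokens
        logits, _ = model(context)                       # returns (lm_logits, presents)
        next_token = torch.argmax(logits[0, -1]).item()  # greedy, for brevity
        generated.append(next_token)
print(tokenizer.decode(generated))
```
Each prediction then only conditions on the most recent 1024 tokens, of course, so there is no dependency beyond the window.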
Would appreciate suggestions or pointers to other sources on how to handle this, thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/530/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/529 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/529/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/529/comments | https://api.github.com/repos/huggingface/transformers/issues/529/events | https://github.com/huggingface/transformers/issues/529 | 436,691,723 | MDU6SXNzdWU0MzY2OTE3MjM= | 529 | Why classifier fine-tuning don't save best model based on the evaluation on dev dataset | {
"login": "nghuyong",
"id": 16462374,
"node_id": "MDQ6VXNlcjE2NDYyMzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/16462374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nghuyong",
"html_url": "https://github.com/nghuyong",
"followers_url": "https://api.github.com/users/nghuyong/followers",
"following_url": "https://api.github.com/users/nghuyong/following{/other_user}",
"gists_url": "https://api.github.com/users/nghuyong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nghuyong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nghuyong/subscriptions",
"organizations_url": "https://api.github.com/users/nghuyong/orgs",
"repos_url": "https://api.github.com/users/nghuyong/repos",
"events_url": "https://api.github.com/users/nghuyong/events{/privacy}",
"received_events_url": "https://api.github.com/users/nghuyong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"were you able to fix this problem. If yes can you please tell how"
] | 1,556 | 1,588 | 1,561 | CONTRIBUTOR | null | I want to use BERT to train a classification model, using the example [run_classifier.py].
But I find that the model just keeps training on the train dataset until max_epoch, without evaluating on the dev dataset and saving the best model according to the dev-set metric.
So the final saved model is just the one from the last epoch, and it will not necessarily be the best model on the dev dataset!
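Roughly, what I would like the example to do is something like this (a pseudocode-style sketch; `train_one_epoch` and `evaluate` are hypothetical helpers, not functions from the script):
```python
best_dev_metric = 0.0
for epoch in range(int(args.num_train_epochs)):
    train_one_epoch(model, train_dataloader)
    dev_metric = evaluate(model, eval_dataloader)  # e.g. dev accuracy
    if dev_metric > best_dev_metric:               # save the best, not the last
        best_dev_metric = dev_metric
        torch.save(model.state_dict(), output_model_file)
```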
Also, I suggest adding an arg --predict that only makes predictions.
This work helps me a lot! Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/529/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/528 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/528/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/528/comments | https://api.github.com/repos/huggingface/transformers/issues/528/events | https://github.com/huggingface/transformers/issues/528 | 436,680,415 | MDU6SXNzdWU0MzY2ODA0MTU= | 528 | __init__() got an unexpected keyword argument 'do_basic_tokenize' | {
"login": "lcswillems",
"id": 5437552,
"node_id": "MDQ6VXNlcjU0Mzc1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lcswillems",
"html_url": "https://github.com/lcswillems",
"followers_url": "https://api.github.com/users/lcswillems/followers",
"following_url": "https://api.github.com/users/lcswillems/following{/other_user}",
"gists_url": "https://api.github.com/users/lcswillems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lcswillems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lcswillems/subscriptions",
"organizations_url": "https://api.github.com/users/lcswillems/orgs",
"repos_url": "https://api.github.com/users/lcswillems/repos",
"events_url": "https://api.github.com/users/lcswillems/events{/privacy}",
"received_events_url": "https://api.github.com/users/lcswillems/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Which version of pytorch-pretrained-bert are you using?\r\nCan you give the full error message to see which call to `__init__()` is failing?\r\nWe should have the keyword argument [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/3d78e226e68a5c5d0ef612132b601024c3534e38/pytorch_pretrained_bert/tokenization.py#L77) ",
"I have the last version (0.6.1).\r\n\r\nThis is what I have on my computer:\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,561 | 1,561 | NONE | null | In the README, this line is written:
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True, do_basic_tokenize=True)
```
But when I execute it, I get this error:
```
__init__() got an unexpected keyword argument 'do_basic_tokenize'
```
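(For anyone else hitting this: the installed pip release likely just predates the keyword, so dropping it should be a safe workaround, since basic tokenization is the default behavior anyway:)
```python
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
```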
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/528/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/527 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/527/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/527/comments | https://api.github.com/repos/huggingface/transformers/issues/527/events | https://github.com/huggingface/transformers/pull/527 | 436,660,128 | MDExOlB1bGxSZXF1ZXN0MjczMTAxNzg4 | 527 | Update example files so that tr_loss is not affected by args.gradient… | {
"login": "Mathieu-Prouveur",
"id": 24923813,
"node_id": "MDQ6VXNlcjI0OTIzODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/24923813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mathieu-Prouveur",
"html_url": "https://github.com/Mathieu-Prouveur",
"followers_url": "https://api.github.com/users/Mathieu-Prouveur/followers",
"following_url": "https://api.github.com/users/Mathieu-Prouveur/following{/other_user}",
"gists_url": "https://api.github.com/users/Mathieu-Prouveur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mathieu-Prouveur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mathieu-Prouveur/subscriptions",
"organizations_url": "https://api.github.com/users/Mathieu-Prouveur/orgs",
"repos_url": "https://api.github.com/users/Mathieu-Prouveur/repos",
"events_url": "https://api.github.com/users/Mathieu-Prouveur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mathieu-Prouveur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @Mathieu-Prouveur, thanks for that.\r\nIndeed I think using `tr_loss/global_step` would be more easy to read.\r\nCan you update this? ",
"Sure, I've just done the update ",
"Great, thanks!"
] | 1,556 | 1,556 | 1,556 | NONE | null | Hi developers!
Fix the training loss value:
* If gradient_accumulation_steps > 1, the batch loss value (which is a mean) is scaled by a factor of 1/args.gradient_accumulation_steps.
To compare it to the evaluation loss, it is thus necessary to scale it back by multiplying by args.gradient_accumulation_steps (as done in the fine-tuning script).
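In code, the idea is roughly the following (a sketch only; variable names follow the example scripts):
```python
loss = model(input_ids, segment_ids, input_mask, label_ids)
if args.gradient_accumulation_steps > 1:
    loss = loss / args.gradient_accumulation_steps  # mean over accumulated steps
tr_loss += loss.item()
# ... later, when logging, undo the scaling so the value is
# comparable to the evaluation loss:
logged_loss = tr_loss * args.gradient_accumulation_steps / nb_tr_steps
```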
Another way to fix this would be to replace the lines using tr_loss/nb_tr_steps with tr_loss/global_step. I thought you might want to consider this alternative. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/527/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/527",
"html_url": "https://github.com/huggingface/transformers/pull/527",
"diff_url": "https://github.com/huggingface/transformers/pull/527.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/527.patch",
"merged_at": 1556615575000
} |
https://api.github.com/repos/huggingface/transformers/issues/526 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/526/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/526/comments | https://api.github.com/repos/huggingface/transformers/issues/526/events | https://github.com/huggingface/transformers/issues/526 | 436,561,267 | MDU6SXNzdWU0MzY1NjEyNjc= | 526 | Will BERT weights for SQuAD be released? | {
"login": "lcswillems",
"id": 5437552,
"node_id": "MDQ6VXNlcjU0Mzc1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lcswillems",
"html_url": "https://github.com/lcswillems",
"followers_url": "https://api.github.com/users/lcswillems/followers",
"following_url": "https://api.github.com/users/lcswillems/following{/other_user}",
"gists_url": "https://api.github.com/users/lcswillems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lcswillems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lcswillems/subscriptions",
"organizations_url": "https://api.github.com/users/lcswillems/orgs",
"repos_url": "https://api.github.com/users/lcswillems/repos",
"events_url": "https://api.github.com/users/lcswillems/events{/privacy}",
"received_events_url": "https://api.github.com/users/lcswillems/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Lucas, probably not.\r\n\r\nThe goal of this repository is to provide easy access to pretrained model for transfer learning research. \r\n\r\nProviding downstream task models will make us handle a combinatory explosion of combinations to provide the various pretrained BERT models fine-tuned on each GLUE/SQuAD task with hyper-parameters optimization and all the relevant adaptation decision that are still mostly open research questions.\r\n\r\nBut we do provide examples for fine-tuning that gives decent results and can be trained in a reasonable time on standard cloud compute.",
"@lcswillems were you able to find the weights anywhere else? ",
"Looks like HF released them after all\r\n\r\nhttps://huggingface.co/transformers/pretrained_models.html"
] | 1,556 | 1,575 | 1,556 | NONE | null | Hi,
Are you going to release the weights after training on SQuAD 2.0?
Thank you for your great work.
Best,
Lucas Willems | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/526/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/525 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/525/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/525/comments | https://api.github.com/repos/huggingface/transformers/issues/525/events | https://github.com/huggingface/transformers/issues/525 | 436,513,242 | MDU6SXNzdWU0MzY1MTMyNDI= | 525 | Should I use weight_decay or weight_decay_rate? | {
"login": "lemonhu",
"id": 22219073,
"node_id": "MDQ6VXNlcjIyMjE5MDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/22219073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lemonhu",
"html_url": "https://github.com/lemonhu",
"followers_url": "https://api.github.com/users/lemonhu/followers",
"following_url": "https://api.github.com/users/lemonhu/following{/other_user}",
"gists_url": "https://api.github.com/users/lemonhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lemonhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lemonhu/subscriptions",
"organizations_url": "https://api.github.com/users/lemonhu/orgs",
"repos_url": "https://api.github.com/users/lemonhu/repos",
"events_url": "https://api.github.com/users/lemonhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/lemonhu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"According to the instructions [module-torch.optim](https://pytorch.org/docs/stable/optim.html?highlight=torch%20optim#module-torch.optim) from PyTorch API and [fused_adam.py](https://github.com/NVIDIA/apex/blob/master/apex/optimizers/fused_adam.py) from apex repo, I think `weight_decay` and `weight_decay_rate` are unified and unified into `weight_decay`, is it correct to understand?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,561 | 1,561 | NONE | null | Thanks for the awesome work.
As in line [simple_lm_finetuning.py#L540](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/lm_finetuning/simple_lm_finetuning.py#L540): when I use BERT for downstream tasks, should I use `weight_decay` or `weight_decay_rate` when adding a decay operation to the training parameters?
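For reference, the grouped-parameters pattern I am using (adapted from the examples; the dict key is exactly the spelling in question):
```python
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
    {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
     'weight_decay': 0.01},   # or should this be 'weight_decay_rate'?
    {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
     'weight_decay': 0.0},
]
```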
What if I use apex for mixed precision training? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/525/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/524 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/524/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/524/comments | https://api.github.com/repos/huggingface/transformers/issues/524/events | https://github.com/huggingface/transformers/issues/524 | 436,308,588 | MDU6SXNzdWU0MzYzMDg1ODg= | 524 | Mixed up isNextSentence label in simple_lm_finetuning.py script? | {
"login": "yakazimir",
"id": 1296330,
"node_id": "MDQ6VXNlcjEyOTYzMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1296330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yakazimir",
"html_url": "https://github.com/yakazimir",
"followers_url": "https://api.github.com/users/yakazimir/followers",
"following_url": "https://api.github.com/users/yakazimir/following{/other_user}",
"gists_url": "https://api.github.com/users/yakazimir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yakazimir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yakazimir/subscriptions",
"organizations_url": "https://api.github.com/users/yakazimir/orgs",
"repos_url": "https://api.github.com/users/yakazimir/repos",
"events_url": "https://api.github.com/users/yakazimir/events{/privacy}",
"received_events_url": "https://api.github.com/users/yakazimir/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
}
] | closed | false | null | [] | [
"Hi, why should it be the other way around?",
"I think I mixed up the meaning of 0 and 1 in this context and maybe wrote this post a bit too quickly before looking deeper into the code and documentation.. (sorry!). On second glance, the documentation for the BertForPreTraining is rather clear: \r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/d76a57b0ba198eee27b3777f57fcabb6aba8b965/pytorch_pretrained_bert/modeling.py#L766 \r\n\r\nI was confused why 0 should mean \"true\" is this case (i.e., is a next sentence continuation) since in classification 0 often means \"false\", but whatever, the way it is written is sound (albeit a little counterintuitive at first glance). ",
"@yakazimir yeah I was confused too. Thanks for your research"
] | 1,556 | 1,564 | 1,556 | NONE | null | I'm wondering if the isNextSentence "label" in the below function is correct? Shouldn't the label be 1 in the case that t1, t2 are taken from self.get_corpus_line(index) (i.e., the first condition on line 150), and 0 if it is random (line 153)?
https://github.com/huggingface/pytorch-pretrained-BERT/blob/c36cca075a32f59a5ec2083e1d39e7d6564c105b/examples/lm_finetuning/simple_lm_finetuning.py#L141-L157 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/524/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/523 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/523/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/523/comments | https://api.github.com/repos/huggingface/transformers/issues/523/events | https://github.com/huggingface/transformers/issues/523 | 436,177,142 | MDU6SXNzdWU0MzYxNzcxNDI= | 523 | ImportError: cannot import name 'WEIGHTS_NAME' from 'pytorch_pretrained_bert.file_utils' | {
"login": "lcswillems",
"id": 5437552,
"node_id": "MDQ6VXNlcjU0Mzc1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lcswillems",
"html_url": "https://github.com/lcswillems",
"followers_url": "https://api.github.com/users/lcswillems/followers",
"following_url": "https://api.github.com/users/lcswillems/following{/other_user}",
"gists_url": "https://api.github.com/users/lcswillems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lcswillems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lcswillems/subscriptions",
"organizations_url": "https://api.github.com/users/lcswillems/orgs",
"repos_url": "https://api.github.com/users/lcswillems/repos",
"events_url": "https://api.github.com/users/lcswillems/events{/privacy}",
"received_events_url": "https://api.github.com/users/lcswillems/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Same is happening for `run_classifier.py` ",
"Yes the examples currently require to install from source (see the section in the readme).\r\nI'll release a new version tomorrow so the pip release will be in sync with `master` examples again.",
"Okay, thank you :)",
"Waiting for this; installing from source gives the error : `ImportError: cannot import name 'warmup_linear'`",
"> Waiting for this; installing from source gives the error : `ImportError: cannot import name 'warmup_linear'`\r\n\r\nIt is not actually using `warmup_linear` so you can safely remove that from the file ",
"Ok, I've just published and uploaded the new v0.6.2 release on pip which should fix this (among other things). Release notes are [here](https://github.com/huggingface/pytorch-pretrained-BERT/releases/tag/v0.6.2)."
] | 1,556 | 1,556 | 1,556 | NONE | null | I just tried to run `run_squad.py` example and I got this error:
```
Traceback (most recent call last):
File "run_squad.py", line 37, in <module>
from pytorch_pretrained_bert.file_utils import PYTORCH_PRETRAINED_BERT_CACHE, WEIGHTS_NAME, CONFIG_NAME
ImportError: cannot import name 'WEIGHTS_NAME' from 'pytorch_pretrained_bert.file_utils' (/mnt/Data/miniconda3/lib/python3.7/site-packages/pytorch_pretrained_bert/file_utils.py)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/523/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/523/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/522 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/522/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/522/comments | https://api.github.com/repos/huggingface/transformers/issues/522/events | https://github.com/huggingface/transformers/issues/522 | 436,137,071 | MDU6SXNzdWU0MzYxMzcwNzE= | 522 | extending of Transformer-XL for new tasks | {
"login": "cherepanovic",
"id": 10064548,
"node_id": "MDQ6VXNlcjEwMDY0NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/10064548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cherepanovic",
"html_url": "https://github.com/cherepanovic",
"followers_url": "https://api.github.com/users/cherepanovic/followers",
"following_url": "https://api.github.com/users/cherepanovic/following{/other_user}",
"gists_url": "https://api.github.com/users/cherepanovic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cherepanovic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cherepanovic/subscriptions",
"organizations_url": "https://api.github.com/users/cherepanovic/orgs",
"repos_url": "https://api.github.com/users/cherepanovic/repos",
"events_url": "https://api.github.com/users/cherepanovic/events{/privacy}",
"received_events_url": "https://api.github.com/users/cherepanovic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,561 | 1,561 | NONE | null | Hello community,
I am looking for an example that could help me extend Transformer-XL into a model similar to the bert-as-service model [1]. I would like to know how to set up new layers on top of the pretrained Transformer-XL and train either just the new layers or the whole model. Could anyone give me any advice on this issue? Thanks a lot.
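To make the ask concrete, something in this spirit (a rough sketch only; it assumes the `TransfoXLModel` API from this repo, and the mean pooling is an arbitrary choice):
```python
import torch.nn as nn
from pytorch_pretrained_bert import TransfoXLModel

class TransfoXLSentenceEncoder(nn.Module):
    """Pool Transformer-XL hidden states into a fixed-size vector
    (as bert-as-service does for BERT), then apply a new head."""
    def __init__(self, num_labels):
        super(TransfoXLSentenceEncoder, self).__init__()
        self.transformer = TransfoXLModel.from_pretrained('transfo-xl-wt103')
        self.head = nn.Linear(self.transformer.config.d_model, num_labels)

    def forward(self, input_ids):
        hidden, _ = self.transformer(input_ids)  # (batch, seq_len, d_model)
        return self.head(hidden.mean(dim=1))     # mean pooling over tokens

model = TransfoXLSentenceEncoder(num_labels=2)
for p in model.transformer.parameters():  # freeze to train only the new head
    p.requires_grad = False
```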
[1] - https://github.com/hanxiao/bert-as-service | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/522/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/521/comments | https://api.github.com/repos/huggingface/transformers/issues/521/events | https://github.com/huggingface/transformers/pull/521 | 436,134,689 | MDExOlB1bGxSZXF1ZXN0MjcyNjg4ODQ2 | 521 | Model type in convert_tf_checkpoint_to_pytorch and 'squad' mapping | {
"login": "mhardalov",
"id": 4447846,
"node_id": "MDQ6VXNlcjQ0NDc4NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4447846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mhardalov",
"html_url": "https://github.com/mhardalov",
"followers_url": "https://api.github.com/users/mhardalov/followers",
"following_url": "https://api.github.com/users/mhardalov/following{/other_user}",
"gists_url": "https://api.github.com/users/mhardalov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mhardalov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhardalov/subscriptions",
"organizations_url": "https://api.github.com/users/mhardalov/orgs",
"repos_url": "https://api.github.com/users/mhardalov/repos",
"events_url": "https://api.github.com/users/mhardalov/events{/privacy}",
"received_events_url": "https://api.github.com/users/mhardalov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/521?src=pr&el=h1) Report\n> Merging [#521](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/521?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/80684f6f86c13a89fc1e4feac248ef96b013765c?src=pr&el=desc) will **decrease** coverage by `0.2%`.\n> The diff coverage is `18.75%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/521?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #521 +/- ##\n==========================================\n- Coverage 67.19% 66.99% -0.21% \n==========================================\n Files 18 18 \n Lines 3847 3869 +22 \n==========================================\n+ Hits 2585 2592 +7 \n- Misses 1262 1277 +15\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/521?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...retrained\\_bert/convert\\_tf\\_checkpoint\\_to\\_pytorch.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/521/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvY29udmVydF90Zl9jaGVja3BvaW50X3RvX3B5dG9yY2gucHk=) | `0% <0%> (ø)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/521/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `86.22% <24%> (-2.35%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/521/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <0%> (+1.06%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/521?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/521?src=pr&el=footer). Last update [80684f6...4a638d1](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/521?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/521?src=pr&el=h1) Report\n> Merging [#521](https://codecov.io/gh/huggingface/pytorch-transformers/pull/521?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/f2a3eb987e1fc2c85320fc3849c67811f5736b50?src=pr&el=desc) will **decrease** coverage by `0.18%`.\n> The diff coverage is `20%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/521?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #521 +/- ##\n==========================================\n- Coverage 79.04% 78.85% -0.19% \n==========================================\n Files 34 34 \n Lines 6242 6262 +20 \n==========================================\n+ Hits 4934 4938 +4 \n- Misses 1308 1324 +16\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/521?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/521/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `85.44% <20%> (-2.54%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/521?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/521?src=pr&el=footer). Last update [f2a3eb9...8e04e9e](https://codecov.io/gh/huggingface/pytorch-transformers/pull/521?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@thomwolf Is this PR still useful? Can it be somehow improved and later merged, or it should be closed?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,556 | 1,570 | 1,570 | NONE | null | Issue #438 still exists if you choose to use something else rather than BertForTokenClassification. Furthermore, you still need to edit the code before running the converter. Lastly, BertForTokenClassification is not the same as BertForQuestionAnswering, since the latter omits the dropout before the output layer.
Maybe it's better to add more options, like 'classification', which uses BertForTokenClassification.
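Concretely, the converter could map a model-type name to a head class, roughly like this (a sketch of the proposal, not the final code):
```python
from pytorch_pretrained_bert.modeling import (
    BertConfig, BertForPreTraining, BertForTokenClassification,
    BertForQuestionAnswering, load_tf_weights_in_bert)

MODEL_CLASSES = {
    'pretraining': BertForPreTraining,
    'classification': BertForTokenClassification,
    'squad': BertForQuestionAnswering,  # no dropout before the output layer
}

config = BertConfig.from_json_file(bert_config_file)
model = MODEL_CLASSES[args.model_type](config)  # 'classification' would also need num_labels
load_tf_weights_in_bert(model, tf_checkpoint_path)
```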
Tested the changes on fine-tuned BERT model on SQuAD 1.1 with Google's original Tensorflow script run_squad.py initialized with multi_cased_L-12_H-768_A-12. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/521/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/521",
"html_url": "https://github.com/huggingface/transformers/pull/521",
"diff_url": "https://github.com/huggingface/transformers/pull/521.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/521.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/520/comments | https://api.github.com/repos/huggingface/transformers/issues/520/events | https://github.com/huggingface/transformers/issues/520 | 436,117,718 | MDU6SXNzdWU0MzYxMTc3MTg= | 520 | unable to load finetuned LM "No file bert_config.json" | {
"login": "omerarshad",
"id": 16164105,
"node_id": "MDQ6VXNlcjE2MTY0MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/16164105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omerarshad",
"html_url": "https://github.com/omerarshad",
"followers_url": "https://api.github.com/users/omerarshad/followers",
"following_url": "https://api.github.com/users/omerarshad/following{/other_user}",
"gists_url": "https://api.github.com/users/omerarshad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omerarshad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omerarshad/subscriptions",
"organizations_url": "https://api.github.com/users/omerarshad/orgs",
"repos_url": "https://api.github.com/users/omerarshad/repos",
"events_url": "https://api.github.com/users/omerarshad/events{/privacy}",
"received_events_url": "https://api.github.com/users/omerarshad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok, this should be fixed in the new release v0.6.2. See #523."
] | 1,556 | 1,556 | 1,556 | NONE | null | No such file or directory: 'LM_Trained/bert_config.json'
I think bert_config.json is not saved when fine-tuning a LM | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/520/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/519/comments | https://api.github.com/repos/huggingface/transformers/issues/519/events | https://github.com/huggingface/transformers/issues/519 | 436,109,381 | MDU6SXNzdWU0MzYxMDkzODE= | 519 | No GPT2 model | {
"login": "lcswillems",
"id": 5437552,
"node_id": "MDQ6VXNlcjU0Mzc1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lcswillems",
"html_url": "https://github.com/lcswillems",
"followers_url": "https://api.github.com/users/lcswillems/followers",
"following_url": "https://api.github.com/users/lcswillems/following{/other_user}",
"gists_url": "https://api.github.com/users/lcswillems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lcswillems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lcswillems/subscriptions",
"organizations_url": "https://api.github.com/users/lcswillems/orgs",
"repos_url": "https://api.github.com/users/lcswillems/repos",
"events_url": "https://api.github.com/users/lcswillems/events{/privacy}",
"received_events_url": "https://api.github.com/users/lcswillems/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
}
] | closed | false | null | [] | [
"Do you have a working internet connection?\r\nWe should probably improve the error messages here, 2 different error are bundled in this error (no internet connection and wrong model name)",
"Yes, I have an internet connection. I am able to download the other models.",
"Oh wait, you are mixing two models here.\r\nGPT-2 and BERT are two different architectures.\r\nIf you want to use GPT-2 do:\r\n```\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\nmodel = GPT2Model.from_pretrained('gpt2')\r\n```\r\nAn example of usage is [here in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#openai-gpt-2)\r\n",
"Okay, thank you! Sorry, for this..."
] | 1,556 | 1,556 | 1,556 | NONE | null | I tried to load the `gpt2` model listed in the README.md, but I got this error:
```
Model name 'gpt2' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'gpt2' was a path or url but couldn't find any file associated to this path or url.
```
The code I used:
```
# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('gpt2')
# Load pre-trained model (weights)
model = BertModel.from_pretrained('gpt2')
_ = model.eval()
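# (Fix from the discussion below, for anyone copy-pasting: GPT-2 has its
#  own classes, so use these instead of the Bert* ones:)
# tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
# model = GPT2Model.from_pretrained('gpt2')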
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/519/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/519/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/518/comments | https://api.github.com/repos/huggingface/transformers/issues/518/events | https://github.com/huggingface/transformers/pull/518 | 436,083,794 | MDExOlB1bGxSZXF1ZXN0MjcyNjQ4Mjkz | 518 | Fix training schedules in examples to match new API | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@lukovnikov do you want to give this PR a look and confirm it's fine?\r\n\r\nAlso, we should document a bit the new optimizer API in the README. Do you want to use this PR to copy a few docstring in the README (we currently don't have auto-generated doc)?",
"Hi. Sorry, forgot about the examples.\r\nDid a couple fixes in my 'schedules_in_examples' branch (see PR #531).\r\nHowever, I don't have the fp16 setup yet so wasn't able to run the examples to be completely sure.\r\nDocs update is here: PR #533.",
"Got it.\r\nOk to merge this PR @lukovnikov?",
"With the fixes from #531, should be good.",
"Thanks!"
] | 1,556 | 1,556 | 1,556 | MEMBER | null | Re #445:
- update examples to work with the new optimizer API | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/518/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/518",
"html_url": "https://github.com/huggingface/transformers/pull/518",
"diff_url": "https://github.com/huggingface/transformers/pull/518.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/518.patch",
"merged_at": 1556218864000
} |
https://api.github.com/repos/huggingface/transformers/issues/517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/517/comments | https://api.github.com/repos/huggingface/transformers/issues/517/events | https://github.com/huggingface/transformers/issues/517 | 435,986,221 | MDU6SXNzdWU0MzU5ODYyMjE= | 517 | More SEPs | {
"login": "shawnkx",
"id": 15963237,
"node_id": "MDQ6VXNlcjE1OTYzMjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/15963237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shawnkx",
"html_url": "https://github.com/shawnkx",
"followers_url": "https://api.github.com/users/shawnkx/followers",
"following_url": "https://api.github.com/users/shawnkx/following{/other_user}",
"gists_url": "https://api.github.com/users/shawnkx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shawnkx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shawnkx/subscriptions",
"organizations_url": "https://api.github.com/users/shawnkx/orgs",
"repos_url": "https://api.github.com/users/shawnkx/repos",
"events_url": "https://api.github.com/users/shawnkx/events{/privacy}",
"received_events_url": "https://api.github.com/users/shawnkx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, only two segment labels are pre-trained in BERT.\r\nYou could fine-tune a new vocabulary token but we don't have a script to do that currently so you would have to modify the vocabulary and model.\r\nGPT and GPT-2 have option to do that where you can take inspiration from.\r\nI'm happy to welcome a PR on this if somebody feels like giving it a try.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,555 | 1,561 | 1,561 | NONE | null | I want to segment input sentences in more segments, like [CLS]S1[SEP]S2[SEP]S3[SEP]. Therefore, when I convert example to features, I do the following.
`segment_ids = [0] * len(tokens_s1)`
`segment_ids += [1] * len(tokens_s2)`
`segment_ids += [2] * len(tokens_s2)`
but I got the following error when I run `self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)`:
> File "/home/xiangk/anaconda2/envs/pytorch0.4/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 1065, in forward
>     sequence_output, _ = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
> File "/home/xiangk/anaconda2/envs/pytorch0.4/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
>     result = self.forward(*input, **kwargs)
> File "/home/xiangk/anaconda2/envs/pytorch0.4/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 712, in forward
>     embedding_output = self.embeddings(input_ids, token_type_ids)
> File "/home/xiangk/anaconda2/envs/pytorch0.4/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
>     result = self.forward(*input, **kwargs)
> File "/home/xiangk/anaconda2/envs/pytorch0.4/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 264, in forward
>     embeddings = self.dropout(embeddings)
> File "/home/xiangk/anaconda2/envs/pytorch0.4/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
>     result = self.forward(*input, **kwargs)
> File "/home/xiangk/anaconda2/envs/pytorch0.4/lib/python3.6/site-packages/torch/nn/modules/dropout.py", line 53, in forward
>     return F.dropout(input, self.p, self.training, self.inplace)
> File "/home/xiangk/anaconda2/envs/pytorch0.4/lib/python3.6/site-packages/torch/nn/functional.py", line 595, in dropout
>     return _functions.dropout.Dropout.apply(input, p, training, inplace)
> File "/home/xiangk/anaconda2/envs/pytorch0.4/lib/python3.6/site-packages/torch/nn/_functions/dropout.py", line 40, in forward
>     ctx.noise.bernoulli_(1 - ctx.p).div_(1 - ctx.p)
> RuntimeError: Creating MTGP constants failed. at /opt/conda/conda-bld/pytorch_1535491974311/work/aten/src/THC/THCTensorRandom.cu:34
after changing `segment_ids += [2] * len(tokens_s2)` to `segment_ids += [1] * len(tokens_s2)`, everything seems to work, but that is not what I want. Any suggestions? Thanks!
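One possible workaround, sketched below (untested here, and it assumes the `pytorch_pretrained_bert` BertModel layout), is to resize the token type embedding so that a third segment id becomes a valid index:
```python
import torch.nn as nn
from pytorch_pretrained_bert import BertModel

model = BertModel.from_pretrained('bert-base-uncased')

# Pretrained BERT only ships 2 token type (segment) embeddings, so a
# segment id of 2 indexes out of range. Swap in a 3-row table instead:
old = model.embeddings.token_type_embeddings
new = nn.Embedding(3, old.weight.size(1))
new.weight.data[:2] = old.weight.data         # keep pretrained rows 0 and 1
new.weight.data[2] = old.weight.data.mean(0)  # arbitrary init for the new row
model.embeddings.token_type_embeddings = new
model.config.type_vocab_size = 3
```
The new row is untrained, so it would have to be fine-tuned before the third segment behaves sensibly.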
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/517/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/516/comments | https://api.github.com/repos/huggingface/transformers/issues/516/events | https://github.com/huggingface/transformers/issues/516 | 435,806,713 | MDU6SXNzdWU0MzU4MDY3MTM= | 516 | Same loss values but different eval result | {
"login": "a-maci",
"id": 23125439,
"node_id": "MDQ6VXNlcjIzMTI1NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/23125439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/a-maci",
"html_url": "https://github.com/a-maci",
"followers_url": "https://api.github.com/users/a-maci/followers",
"following_url": "https://api.github.com/users/a-maci/following{/other_user}",
"gists_url": "https://api.github.com/users/a-maci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/a-maci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/a-maci/subscriptions",
"organizations_url": "https://api.github.com/users/a-maci/orgs",
"repos_url": "https://api.github.com/users/a-maci/repos",
"events_url": "https://api.github.com/users/a-maci/events{/privacy}",
"received_events_url": "https://api.github.com/users/a-maci/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have never tried Int8 in PyTorch.\r\nCan you share some code so we can have a look?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,555 | 1,561 | 1,561 | NONE | null | I am experimenting with low precision on the pre-trained BERT model for the SQuAD scenario.
I am seeing a strange issue: the loss value when fine-tuning the model with FP16 is very similar to the loss value when fine-tuning the model at Int8. However, the eval results are quite different: with Int8, the results are quite bad (f1 = 3) compared to f1 = 88 with FP16.
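One debugging sketch, assuming `fp16_model` and `int8_model` are the two fine-tuned `BertForQuestionAnswering` checkpoints and `input_ids`/`segment_ids`/`input_mask` come from a single eval batch (all of these names are placeholders):

```python
import torch

# Compare the two models' raw predictions on one batch: a uniformly large gap
# points at broken weights/quantization, a small gap points at eval-side issues.
with torch.no_grad():
    fp16_start, fp16_end = fp16_model(input_ids, segment_ids, input_mask)
    int8_start, int8_end = int8_model(input_ids, segment_ids, input_mask)
print((fp16_start.float() - int8_start.float()).abs().max())
print((fp16_end.float() - int8_end.float()).abs().max())
```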
Any idea what is going on and suggestions for debugging? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/516/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/515/comments | https://api.github.com/repos/huggingface/transformers/issues/515/events | https://github.com/huggingface/transformers/pull/515 | 435,719,517 | MDExOlB1bGxSZXF1ZXN0MjcyMzY3ODIy | 515 | Fix --reduce_memory in finetune_on_pregenerated | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Good catch!"
] | 1,555 | 1,556 | 1,556 | MEMBER | null | On reviewing the code I realized the --reduce_memory code path in `finetune_on_pregenerated.py` had a bug, but also wasn't getting used because the relevant argument wasn't getting passed correctly. The bugs have been fixed and the argument is now passed correctly. Performance still seems good, so now it should be possible to train without loading the whole epoch of training data into memory. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/515/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/515",
"html_url": "https://github.com/huggingface/transformers/pull/515",
"diff_url": "https://github.com/huggingface/transformers/pull/515.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/515.patch",
"merged_at": 1556008223000
} |
https://api.github.com/repos/huggingface/transformers/issues/514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/514/comments | https://api.github.com/repos/huggingface/transformers/issues/514/events | https://github.com/huggingface/transformers/issues/514 | 435,672,972 | MDU6SXNzdWU0MzU2NzI5NzI= | 514 | ADD ERNIE | {
"login": "nghuyong",
"id": 16462374,
"node_id": "MDQ6VXNlcjE2NDYyMzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/16462374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nghuyong",
"html_url": "https://github.com/nghuyong",
"followers_url": "https://api.github.com/users/nghuyong/followers",
"following_url": "https://api.github.com/users/nghuyong/following{/other_user}",
"gists_url": "https://api.github.com/users/nghuyong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nghuyong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nghuyong/subscriptions",
"organizations_url": "https://api.github.com/users/nghuyong/orgs",
"repos_url": "https://api.github.com/users/nghuyong/repos",
"events_url": "https://api.github.com/users/nghuyong/events{/privacy}",
"received_events_url": "https://api.github.com/users/nghuyong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
}
] | closed | false | null | [] | [
"Hi @nghuyong, I won't convert ERNIE but I'm open to welcome a PR if somebody want to give it a try.\r\n\r\nAlso, note that unlike examples, a PR with a new model should have a configuration class, tests, a conversion script and be documented like the other models in the library.\r\n",
"I do implement that converting ERNIE to huggingface's format\r\nThe address is https://github.com/nghuyong/ERNIE-Pytorch\r\nWelcome to use and open issue if have problems"
] | 1,555 | 1,557 | 1,557 | CONTRIBUTOR | null | Can we add a new model ERNIE?
ERNIE is based on the BERT model and has better performance on Chinese NLP tasks.
GitHub address: https://github.com/PaddlePaddle/LARK/tree/develop/ERNIE
paper: https://arxiv.org/abs/1904.09223
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/514/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/513/comments | https://api.github.com/repos/huggingface/transformers/issues/513/events | https://github.com/huggingface/transformers/issues/513 | 435,620,361 | MDU6SXNzdWU0MzU2MjAzNjE= | 513 | How many epochs are necessary for finetuning BERT? | {
"login": "search4mahesh",
"id": 4182331,
"node_id": "MDQ6VXNlcjQxODIzMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4182331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/search4mahesh",
"html_url": "https://github.com/search4mahesh",
"followers_url": "https://api.github.com/users/search4mahesh/followers",
"following_url": "https://api.github.com/users/search4mahesh/following{/other_user}",
"gists_url": "https://api.github.com/users/search4mahesh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/search4mahesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/search4mahesh/subscriptions",
"organizations_url": "https://api.github.com/users/search4mahesh/orgs",
"repos_url": "https://api.github.com/users/search4mahesh/repos",
"events_url": "https://api.github.com/users/search4mahesh/events{/privacy}",
"received_events_url": "https://api.github.com/users/search4mahesh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have tried to finetune GPT rather than BERT. An appropriate running epochs is **3** in the generation setting, including learning on embedding of some custom special tokens. Hope it help you :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,555 | 1,561 | 1,561 | NONE | null | Hi,
Could somebody provide some insight into how many epochs are necessary for fine-tuning a BERT model?
Google's BERT uses 100,000 training steps (total_data/batch_size):
flags.DEFINE_integer("num_train_steps", 100000, "Number of training steps.")
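For context, a minimal sketch of how that step count relates to epochs (the dataset and batch sizes below are made-up placeholders; the BERT paper recommends 2-4 epochs for fine-tuning):

```python
# Made-up numbers, purely to illustrate the epochs <-> steps relationship.
num_examples = 400_000   # size of the fine-tuning set (hypothetical)
batch_size = 32
epochs = 3               # within the paper's recommended 2-4 range

num_train_steps = epochs * num_examples // batch_size
print(num_train_steps)   # 37500
```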
Thanks
Mahesh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/513/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/512 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/512/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/512/comments | https://api.github.com/repos/huggingface/transformers/issues/512/events | https://github.com/huggingface/transformers/pull/512 | 435,529,853 | MDExOlB1bGxSZXF1ZXN0MjcyMjI5Mjg5 | 512 | Fix indentation weirdness in GPT-2 example. | {
"login": "cynthia",
"id": 43924,
"node_id": "MDQ6VXNlcjQzOTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/43924?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cynthia",
"html_url": "https://github.com/cynthia",
"followers_url": "https://api.github.com/users/cynthia/followers",
"following_url": "https://api.github.com/users/cynthia/following{/other_user}",
"gists_url": "https://api.github.com/users/cynthia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cynthia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cynthia/subscriptions",
"organizations_url": "https://api.github.com/users/cynthia/orgs",
"repos_url": "https://api.github.com/users/cynthia/repos",
"events_url": "https://api.github.com/users/cynthia/events{/privacy}",
"received_events_url": "https://api.github.com/users/cynthia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @cynthia!"
] | 1,555 | 1,556 | 1,556 | CONTRIBUTOR | null | Minor patch, not sure how it originally managed to sneak in in the first place. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/512/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/512",
"html_url": "https://github.com/huggingface/transformers/pull/512",
"diff_url": "https://github.com/huggingface/transformers/pull/512.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/512.patch",
"merged_at": 1556008142000
} |
https://api.github.com/repos/huggingface/transformers/issues/511 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/511/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/511/comments | https://api.github.com/repos/huggingface/transformers/issues/511/events | https://github.com/huggingface/transformers/issues/511 | 435,509,991 | MDU6SXNzdWU0MzU1MDk5OTE= | 511 | error when trying to use multilingual model for fine tuning | {
"login": "KavyaGujjala",
"id": 28920687,
"node_id": "MDQ6VXNlcjI4OTIwNjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/28920687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KavyaGujjala",
"html_url": "https://github.com/KavyaGujjala",
"followers_url": "https://api.github.com/users/KavyaGujjala/followers",
"following_url": "https://api.github.com/users/KavyaGujjala/following{/other_user}",
"gists_url": "https://api.github.com/users/KavyaGujjala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KavyaGujjala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KavyaGujjala/subscriptions",
"organizations_url": "https://api.github.com/users/KavyaGujjala/orgs",
"repos_url": "https://api.github.com/users/KavyaGujjala/repos",
"events_url": "https://api.github.com/users/KavyaGujjala/events{/privacy}",
"received_events_url": "https://api.github.com/users/KavyaGujjala/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I made changes in the code pregenerate_training_data.py\r\nfrom \r\n```\r\nparser.add_argument(\"--bert_model\", type=str, required=True,\r\n choices=[\"bert-base-uncased\", \"bert-large-uncased\", \"bert-base-cased\",\r\n \"bert-base-multilingual\", \"bert-base-chinese\"])\r\n```\r\nto\r\n```\r\nparser.add_argument(\"--bert_model\", type=str, required=True,\r\n choices=[\"bert-base-uncased\", \"bert-large-uncased\", \"bert-base-cased\",\r\n \"bert-base-multilingual-cased\", \"bert-base-multilingual-uncased\", \"bert-base-chinese\"])\r\n```\r\n\r\n\r\nand it worked.",
"It occured to me maybe because I forgot to install pytorch. I installed pytorch then it's solved.",
"Hi, \r\nI followed your code, and got this error:\r\n\r\nTraceback (most recent call last): | 6796/185072 [00:00<00:18, 9787.42it/s]\r\n File \"pregenerate_training_data.py\", line 308, in <module>\r\n main()\r\n File \"pregenerate_training_data.py\", line 293, in main\r\n vocab_list=vocab_list)\r\n File \"pregenerate_training_data.py\", line 208, in create_instances_from_document\r\n assert len(tokens_b) >= 1\r\nAssertionError\r\n\r\nCan you please share your code?",
"What computer specification to train your corpus? How big it is and how long you need to training your corpus?\r\n\r\nI wanna too train my corpus using fine tuning, maybe your answers give me an insight about how relevant me to training the corpus, thanks",
"> Hi,\r\n> I followed your code, and got this error:\r\n> \r\n> Traceback (most recent call last): | 6796/185072 [00:00<00:18, 9787.42it/s]\r\n> File \"pregenerate_training_data.py\", line 308, in \r\n> main()\r\n> File \"pregenerate_training_data.py\", line 293, in main\r\n> vocab_list=vocab_list)\r\n> File \"pregenerate_training_data.py\", line 208, in create_instances_from_document\r\n> assert len(tokens_b) >= 1\r\n> AssertionError\r\n> \r\n> Can you please share your code?\r\n\r\nI run into the same problem. Wondering if you have solved your problem. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,555 | 1,568 | 1,568 | NONE | null | I wanted to use fine tuning for hindi language data. For that I tried to give bert-base-mutlilingual model but I am getting the following error
> python pregenerate_training_data.py --train_corpus=./hindi_pytorch_bert_data_1.txt --bert_model=bert-base-multilingual --output_dir=./hindi_train_data_1_3epochs/ --epochs_to_generate=3
```
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
Model name 'bert-base-multilingual' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'bert-base-multilingual' was a path or url but couldn't find any file associated to this path or url.
Traceback (most recent call last):
File "pregenerate_training_data.py", line 292, in <module>
main()
File "pregenerate_training_data.py", line 255, in main
vocab_list = list(tokenizer.vocab.keys())
AttributeError: 'NoneType' object has no attribute 'vocab'
```
I tried giving bert-base-multilingual-cased as well, but then I ran into this error:
> python pregenerate_training_data.py --train_corpus=./hindi_pytorch_bert_data_1.txt --bert_model=bert-base-multilingual-cased --output_dir=./hindi_train_data_1_3epochs/ --epochs_to_generate=3
```
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
usage: pregenerate_training_data.py [-h] --train_corpus TRAIN_CORPUS
--output_dir OUTPUT_DIR --bert_model
{bert-base-uncased,bert-large-uncased,bert-base-cased,bert-base-multilingual,bert-base-chinese}
[--do_lower_case] [--reduce_memory]
[--epochs_to_generate EPOCHS_TO_GENERATE]
[--max_seq_len MAX_SEQ_LEN]
[--short_seq_prob SHORT_SEQ_PROB]
[--masked_lm_prob MASKED_LM_PROB]
[--max_predictions_per_seq MAX_PREDICTIONS_PER_SEQ]
pregenerate_training_data.py: error: argument --bert_model: invalid choice: 'bert-base-multilingual-cased' (choose from 'bert-base-uncased', 'bert-large-uncased', 'bert-base-cased', 'bert-base-multilingual', 'bert-base-chinese')
```
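For what it's worth, the tokenizer itself loads fine when given the cased multilingual shortcut name, so this looks like an out-of-date argparse `choices` list rather than a missing model. A quick sanity check, assuming pytorch_pretrained_bert is installed:

```python
from pytorch_pretrained_bert import BertTokenizer

# 'bert-base-multilingual-cased' is a valid shortcut name in the library;
# only the pregeneration script's `choices` list is missing it.
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
print(len(tokenizer.vocab))
```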
How do I resolve this issue? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/511/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/510 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/510/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/510/comments | https://api.github.com/repos/huggingface/transformers/issues/510/events | https://github.com/huggingface/transformers/issues/510 | 435,454,324 | MDU6SXNzdWU0MzU0NTQzMjQ= | 510 | Adam optimiser not following Pytorch conventions | {
"login": "tonianelope",
"id": 23743176,
"node_id": "MDQ6VXNlcjIzNzQzMTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/23743176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tonianelope",
"html_url": "https://github.com/tonianelope",
"followers_url": "https://api.github.com/users/tonianelope/followers",
"following_url": "https://api.github.com/users/tonianelope/following{/other_user}",
"gists_url": "https://api.github.com/users/tonianelope/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tonianelope/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tonianelope/subscriptions",
"organizations_url": "https://api.github.com/users/tonianelope/orgs",
"repos_url": "https://api.github.com/users/tonianelope/repos",
"events_url": "https://api.github.com/users/tonianelope/events{/privacy}",
"received_events_url": "https://api.github.com/users/tonianelope/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"We could update that indeed, that's just a relic of the Tensorflow conversion.\r\nDo you want to submit a PR? Otherwise I'll do it when I work on the next release.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,555 | 1,561 | 1,561 | CONTRIBUTOR | null | Both [BertAdam](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/optimization.py) and [OpenAIAdam](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/optimization_openai.py) don't follow the PyTorch convention of defining the `betas` parameter for [Adam optimisers](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) as a tuple, but instead have separate parameters `b1` and `b2`.
PyTorch-based libraries like fastai expect the optimizer `betas` to be a tuple.
Any reason `b1`/`b2` are used instead of a tuple? It would be great to change this so the optimisers can integrate with other PyTorch libraries.
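A minimal sketch of the shim this mismatch currently forces on PyTorch-style callers (the wrapper function and `model` below are hypothetical, not part of the library):

```python
from pytorch_pretrained_bert import BertAdam

# Hypothetical adapter: accept the standard PyTorch-style `betas` tuple and
# translate it into the library's separate `b1`/`b2` keyword arguments.
def bert_adam_with_betas(params, lr, betas=(0.9, 0.999), **kwargs):
    b1, b2 = betas
    return BertAdam(params, lr=lr, b1=b1, b2=b2, **kwargs)

optimizer = bert_adam_with_betas(model.parameters(), lr=3e-5)
```

Switching the library to a single `betas` tuple would make adapters like this unnecessary.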
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/510/timeline | completed | null | null |