url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/409/comments | https://api.github.com/repos/huggingface/transformers/issues/409/events | https://github.com/huggingface/transformers/pull/409 | 425,405,711 | MDExOlB1bGxSZXF1ZXN0MjY0NTA5NzI5 | 409 | Remove padding_idx from position_embeddings and token_type_embeddings | {
"login": "ikuyamada",
"id": 426342,
"node_id": "MDQ6VXNlcjQyNjM0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/426342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ikuyamada",
"html_url": "https://github.com/ikuyamada",
"followers_url": "https://api.github.com/users/ikuyamada/followers",
"following_url": "https://api.github.com/users/ikuyamada/following{/other_user}",
"gists_url": "https://api.github.com/users/ikuyamada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ikuyamada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ikuyamada/subscriptions",
"organizations_url": "https://api.github.com/users/ikuyamada/orgs",
"repos_url": "https://api.github.com/users/ikuyamada/repos",
"events_url": "https://api.github.com/users/ikuyamada/events{/privacy}",
"received_events_url": "https://api.github.com/users/ikuyamada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, thanks @ikuyamada!"
] | 1,553 | 1,553 | 1,553 | CONTRIBUTOR | null | Because embedding vectors at 0th position of `position_embeddings` and `token_type_embeddings` have roles in the model (i.e., representing the first token and the token in the first sentence), these vectors should not be treated as padding vectors. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/409/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/409",
"html_url": "https://github.com/huggingface/transformers/pull/409",
"diff_url": "https://github.com/huggingface/transformers/pull/409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/409.patch",
"merged_at": 1553686204000
} |
https://api.github.com/repos/huggingface/transformers/issues/408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/408/comments | https://api.github.com/repos/huggingface/transformers/issues/408/events | https://github.com/huggingface/transformers/issues/408 | 425,298,071 | MDU6SXNzdWU0MjUyOTgwNzE= | 408 | slow training speed even 20 steps | {
"login": "KavyaGujjala",
"id": 28920687,
"node_id": "MDQ6VXNlcjI4OTIwNjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/28920687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KavyaGujjala",
"html_url": "https://github.com/KavyaGujjala",
"followers_url": "https://api.github.com/users/KavyaGujjala/followers",
"following_url": "https://api.github.com/users/KavyaGujjala/following{/other_user}",
"gists_url": "https://api.github.com/users/KavyaGujjala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KavyaGujjala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KavyaGujjala/subscriptions",
"organizations_url": "https://api.github.com/users/KavyaGujjala/orgs",
"repos_url": "https://api.github.com/users/KavyaGujjala/repos",
"events_url": "https://api.github.com/users/KavyaGujjala/events{/privacy}",
"received_events_url": "https://api.github.com/users/KavyaGujjala/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"We have new lm_fintetuning scripts thanks to @Rocketknight1 PR #392.\r\nMaybe you can try these ones.\r\nThey are in the `./examples/lm_finetuning/` folder.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,553 | 1,559 | 1,559 | NONE | null | Hi,
I am running the run_lm_finetuning.py code on a corpus of 1 million sentences.
It takes more than 20 hours for each epoch,
whereas the run_pretraining.py code from google-research/bert takes much less time.
What could be the reason?
How can I resolve this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/408/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/407/comments | https://api.github.com/repos/huggingface/transformers/issues/407/events | https://github.com/huggingface/transformers/issues/407 | 425,161,342 | MDU6SXNzdWU0MjUxNjEzNDI= | 407 | AllenNLP TransformerXL | {
"login": "DataDaveH",
"id": 25015275,
"node_id": "MDQ6VXNlcjI1MDE1Mjc1",
"avatar_url": "https://avatars.githubusercontent.com/u/25015275?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DataDaveH",
"html_url": "https://github.com/DataDaveH",
"followers_url": "https://api.github.com/users/DataDaveH/followers",
"following_url": "https://api.github.com/users/DataDaveH/following{/other_user}",
"gists_url": "https://api.github.com/users/DataDaveH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DataDaveH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DataDaveH/subscriptions",
"organizations_url": "https://api.github.com/users/DataDaveH/orgs",
"repos_url": "https://api.github.com/users/DataDaveH/repos",
"events_url": "https://api.github.com/users/DataDaveH/events{/privacy}",
"received_events_url": "https://api.github.com/users/DataDaveH/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Maybe you should ask in the AllenNLP repo as well?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,553 | 1,559 | 1,559 | NONE | null | Has anyone done any work on wrapping up TransformerXL for AllenNLP? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/407/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/406/comments | https://api.github.com/repos/huggingface/transformers/issues/406/events | https://github.com/huggingface/transformers/issues/406 | 425,045,668 | MDU6SXNzdWU0MjUwNDU2Njg= | 406 | error when trying to get embeddings after fine tuning | {
"login": "KavyaGujjala",
"id": 28920687,
"node_id": "MDQ6VXNlcjI4OTIwNjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/28920687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KavyaGujjala",
"html_url": "https://github.com/KavyaGujjala",
"followers_url": "https://api.github.com/users/KavyaGujjala/followers",
"following_url": "https://api.github.com/users/KavyaGujjala/following{/other_user}",
"gists_url": "https://api.github.com/users/KavyaGujjala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KavyaGujjala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KavyaGujjala/subscriptions",
"organizations_url": "https://api.github.com/users/KavyaGujjala/orgs",
"repos_url": "https://api.github.com/users/KavyaGujjala/repos",
"events_url": "https://api.github.com/users/KavyaGujjala/events{/privacy}",
"received_events_url": "https://api.github.com/users/KavyaGujjala/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I realised that the way I was loading model was wrong.\r\n\r\nI used this code\r\n\r\n ```\r\n# Save a trained model \r\n model_to_save = model.module if hasattr(model, 'module') else model # Only save the model it-self \r\n output_model_file = os.path.join(args.output_dir, \"pytorch_model.bin\") \r\n torch.save(model_to_save.state_dict(), output_model_file) \r\n \r\n # Load a trained model that you have fine-tuned \r\n model_state_dict = torch.load(output_model_file) \r\n model = BertForQuestionAnswering.from_pretrained(args.bert_model, state_dict=model_state_dict) \r\n model.to(device)\r\n```"
] | 1,553 | 1,553 | 1,553 | NONE | null | I have used run_lm_finetuning.py code on my domain specific corpus.
Now I want to use the fine tuned model to get better embeddings.
>>> import torch
>>> config = modeling.BertConfig(attention_probs_dropout_prob=0.1, hidden_dropout_prob=0.1, hidden_size=768, initializer_range=0.02, intermediate_size=3072, max_position_embeddings=512, num_attention_heads=12, num_hidden_layers=12, vocab_size_or_config_json_file=30522)
>>> model = modeling.BertEmbeddings(config)
>>> model_state_dict = "/home/cloud/pytorch_bert/pytorch-pretrained-BERT-master/examples/models/pytorch_model.bin"
>>> model.load_state_dict(torch.load(model_state_dict))
I got an error like
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/cloud/miniconda3/envs/tensorflow_gpu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 769, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for BertEmbeddings:
Missing key(s) in state_dict: "word_embeddings.weight", "position_embeddings.weight", "token_type_embeddings.weight", "LayerNorm.weight", "LayerNorm.bias".
Unexpected key(s) in state_dict: "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight", "bert.encoder.layer.0.attention.self.query.bias", "bert.encoder.layer.0.attention.self.key.weight", "bert.encoder.layer.0.attention.self.key.bias", "bert.encoder.layer.0.attention.self.value.weight", "bert.encoder.layer.0.attention.self.value.bias", "bert.encoder.layer.0.attention.output.dense.weight", "bert.encoder.layer.0.attention.output.dense.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.intermediate.dense.weight", "bert.encoder.layer.0.intermediate.dense.bias", "bert.encoder.layer.0.output.dense.weight", "bert.encoder.layer.0.output.dense.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.self.query.weight", "bert.encoder.layer.1.attention.self.query.bias", "bert.encoder.layer.1.attention.self.key.weight", "bert.encoder.layer.1.attention.self.key.bias", "bert.encoder.layer.1.attention.self.value.weight", "bert.encoder.layer.1.attention.self.value.bias", "bert.encoder.layer.1.attention.output.dense.weight", "bert.encoder.layer.1.attention.output.dense.bias", "bert.encoder.layer.1.attention.output.LayerNorm.weight", "bert.encoder.layer.1.attention.output.LayerNorm.bias", "bert.encoder.layer.1.intermediate.dense.weight", "bert.encoder.layer.1.intermediate.dense.bias", "bert.encoder.layer.1.output.dense.weight", "bert.encoder.layer.1.output.dense.bias", "bert.encoder.layer.1.output.LayerNorm.weight", "bert.encoder.layer.1.output.LayerNorm.bias", "bert.encoder.layer.2.attention.self.query.weight", "bert.encoder.layer.2.attention.self.query.bias", "bert.encoder.layer.2.attention.self.key.weight", "bert.encoder.layer.2.attention.self.key.bias", "bert.encoder.layer.2.attention.self.value.weight", "bert.encoder.layer.2.attention.self.value.bias", "bert.encoder.layer.2.attention.output.dense.weight", "bert.encoder.layer.2.attention.output.dense.bias", "bert.encoder.layer.2.attention.output.LayerNorm.weight", "bert.encoder.layer.2.attention.output.LayerNorm.bias", "bert.encoder.layer.2.intermediate.dense.weight", "bert.encoder.layer.2.intermediate.dense.bias", "bert.encoder.layer.2.output.dense.weight", "bert.encoder.layer.2.output.dense.bias", "bert.encoder.layer.2.output.LayerNorm.weight", "bert.encoder.layer.2.output.LayerNorm.bias", "bert.encoder.layer.3.attention.self.query.weight", "bert.encoder.layer.3.attention.self.query.bias", "bert.encoder.layer.3.attention.self.key.weight", "bert.encoder.layer.3.attention.self.key.bias", "bert.encoder.layer.3.attention.self.value.weight", "bert.encoder.layer.3.attention.self.value.bias", "bert.encoder.layer.3.attention.output.dense.weight", "bert.encoder.layer.3.attention.output.dense.bias", "bert.encoder.layer.3.attention.output.LayerNorm.weight", "bert.encoder.layer.3.attention.output.LayerNorm.bias", "bert.encoder.layer.3.intermediate.dense.weight", "bert.encoder.layer.3.intermediate.dense.bias", "bert.encoder.layer.3.output.dense.weight", "bert.encoder.layer.3.output.dense.bias", "bert.encoder.layer.3.output.LayerNorm.weight", "bert.encoder.layer.3.output.LayerNorm.bias", "bert.encoder.layer.4.attention.self.query.weight", "bert.encoder.layer.4.attention.self.query.bias", 
"bert.encoder.layer.4.attention.self.key.weight", "bert.encoder.layer.4.attention.self.key.bias", "bert.encoder.layer.4.attention.self.value.weight", "bert.encoder.layer.4.attention.self.value.bias", "bert.encoder.layer.4.attention.output.dense.weight", "bert.encoder.layer.4.attention.output.dense.bias", "bert.encoder.layer.4.attention.output.LayerNorm.weight", "bert.encoder.layer.4.attention.output.LayerNorm.bias", "bert.encoder.layer.4.intermediate.dense.weight", "bert.encoder.layer.4.intermediate.dense.bias", "bert.encoder.layer.4.output.dense.weight", "bert.encoder.layer.4.output.dense.bias", "bert.encoder.layer.4.output.LayerNorm.weight", "bert.encoder.layer.4.output.LayerNorm.bias", "bert.encoder.layer.5.attention.self.query.weight", "bert.encoder.layer.5.attention.self.query.bias", "bert.encoder.layer.5.attention.self.key.weight", "bert.encoder.layer.5.attention.self.key.bias", "bert.encoder.layer.5.attention.self.value.weight", "bert.encoder.layer.5.attention.self.value.bias", "bert.encoder.layer.5.attention.output.dense.weight", "bert.encoder.layer.5.attention.output.dense.bias", "bert.encoder.layer.5.attention.output.LayerNorm.weight", "bert.encoder.layer.5.attention.output.LayerNorm.bias", "bert.encoder.layer.5.intermediate.dense.weight", "bert.encoder.layer.5.intermediate.dense.bias", "bert.encoder.layer.5.output.dense.weight", "bert.encoder.layer.5.output.dense.bias", "bert.encoder.layer.5.output.LayerNorm.weight", "bert.encoder.layer.5.output.LayerNorm.bias", "bert.encoder.layer.6.attention.self.query.weight", "bert.encoder.layer.6.attention.self.query.bias", "bert.encoder.layer.6.attention.self.key.weight", "bert.encoder.layer.6.attention.self.key.bias", "bert.encoder.layer.6.attention.self.value.weight", "bert.encoder.layer.6.attention.self.value.bias", "bert.encoder.layer.6.attention.output.dense.weight", "bert.encoder.layer.6.attention.output.dense.bias", "bert.encoder.layer.6.attention.output.LayerNorm.weight", "bert.encoder.layer.6.attention.output.LayerNorm.bias", "bert.encoder.layer.6.intermediate.dense.weight", "bert.encoder.layer.6.intermediate.dense.bias", "bert.encoder.layer.6.output.dense.weight", "bert.encoder.layer.6.output.dense.bias", "bert.encoder.layer.6.output.LayerNorm.weight", "bert.encoder.layer.6.output.LayerNorm.bias", "bert.encoder.layer.7.attention.self.query.weight", "bert.encoder.layer.7.attention.self.query.bias", "bert.encoder.layer.7.attention.self.key.weight", "bert.encoder.layer.7.attention.self.key.bias", "bert.encoder.layer.7.attention.self.value......
Any idea where I am going wrong? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/406/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/405/comments | https://api.github.com/repos/huggingface/transformers/issues/405/events | https://github.com/huggingface/transformers/issues/405 | 425,034,809 | MDU6SXNzdWU0MjUwMzQ4MDk= | 405 | embeddings after fine tuning | {
"login": "KavyaGujjala",
"id": 28920687,
"node_id": "MDQ6VXNlcjI4OTIwNjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/28920687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KavyaGujjala",
"html_url": "https://github.com/KavyaGujjala",
"followers_url": "https://api.github.com/users/KavyaGujjala/followers",
"following_url": "https://api.github.com/users/KavyaGujjala/following{/other_user}",
"gists_url": "https://api.github.com/users/KavyaGujjala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KavyaGujjala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KavyaGujjala/subscriptions",
"organizations_url": "https://api.github.com/users/KavyaGujjala/orgs",
"repos_url": "https://api.github.com/users/KavyaGujjala/repos",
"events_url": "https://api.github.com/users/KavyaGujjala/events{/privacy}",
"received_events_url": "https://api.github.com/users/KavyaGujjala/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I used this code and it worked.\r\n\r\n```\r\n output_model_file = /path/to/pytorch_mode.bin/\r\n model_state_dict = torch.load(output_model_file) \r\n model = BertModel.from_pretrained(bert_model, state_dict=model_state_dict)\r\n```",
"Hi @KavyaGujjala \r\n\r\nI was fine-tuning the 'Bert base uncased' model as per the [Google Bert Repository](https://github.com/google-research/bert) on my domain specific data. I used the existing WordPiece vocab and ran pre-training for 50000 steps on the in-domain text to learn the compositionality. I realized that the updated embeddings were improved by manual evaluation. But I really want to add my domain words to the vocab, as they carry importance for my downstream tasks.\r\n\r\nCan you please tell how did you handle the domain vocab while fine-tuning? Did you add your domain words to the vocab file? If yes, did you append the words to the vocab file or just replaced the [unusedXX] words from the file? \r\n \r\nDid you see any improvements after fine-tuning your models? How did you evaluate the embeddings quality for your domain, if not evaluated manually? \r\n",
"Hi @harmanpreet93 \r\n\r\nI haven't used fine tuning code from actual google bert repo but used the pretraining code. I used finetuning code from pytorch repo but the embeddings were getting changed everytime I loaded the model. So I am just pretraining for domain specific data.\r\n\r\nMy first question, you are doing fine tuning or pretraining?\r\n\r\nI have pretrained using bert base uncased model for 10000 steps. After that I got the sentence representation using [CLS] token from the final hidden layer. Compared the cosine similarity between sentences. I didnt get good results though. Dont know what I am missing. May be as you have mentioned I need to add words to vocab file.\r\n\r\n",
"Hi @KavyaGujjala \r\n \r\n> My first question, you are doing fine tuning or pretraining? \r\n\r\nYes, I too used pre-training code from bert repo. After pretraining on domain data for 50k steps, the embeddings got updated. \r\n\r\n> Compared the cosine similarity between sentences. I didnt get good results though. \r\n\r\n**Q:** How did you evaluate the quality of sentence embeddings? By not getting good results, do you mean that the similar sentences weren't scored at the top or the cosine scores weren't good? \r\n \r\nIn my case the similarity scores decreased, but the relevant sentences started showing up in the topk. \r\n \r\n> May be as you have mentioned I need to add words to vocab file. \r\n \r\nI'm referring to [this](https://github.com/google-research/bert/issues/9) issue from bert repo. Its suggests the following approaches: \r\n> But if you want to add more vocab you can either:\r\n(a) Just replace the \"[unusedX]\" tokens with your vocabulary. Since these were not used they are effectively randomly initialized.\r\n(b) Append it to the end of the vocab, and write a script which generates a new checkpoint that is identical to the pre-trained checkpoint, but but with a bigger vocab where the new embeddings are randomly initialized (for initialized we used tf.truncated_normal_initializer(stddev=0.02)). This will likely require mucking around with some tf.concat() and tf.assign() calls. \r\n \r\n\r\n**Q:** Did you try any of these approaches for the bert repo? I tried approach (b), by adding my domain words to the vocab file and updated the `vocab_size` as per [repo](https://github.com/google-research/bert#pre-training-with-bert). I ran into following error: \r\n`ValueError: Shape of variable bert/embeddings/word_embeddings:0 ((33297, 768)) doesn't match with shape of tensor bert/embeddings/word_embeddings ([30522, 768]) from checkpoint reader.`\r\n\r\n**Q:** How were the sentence embeddings calculated? By averaging the word embeddings or any other approach? ",
"Hi @harmanpreet93 \r\n\r\n> Yes, I too used pre-training code from bert repo. After pretraining on domain data for 50k steps, the embeddings got updated.\r\n\r\nWhat is the size of dataset you have used? My dataset has like 1 million sentences domain specific.\r\n\r\n> Q: How did you evaluate the quality of sentence embeddings? By not getting good results, do you mean that the similar sentences weren't scored at the top or the cosine scores weren't good?\r\n\r\nyeah cosine scores aren't good enough. Even dissimilar sentences are getting better similarity scores than the similar ones. Does training for more steps solve this issue?\r\n\r\n> Q: Did you try any of these approaches for the bert repo? I tried approach (b), by adding my domain words to the vocab file and updated the vocab_size as per repo. I ran into following error:\r\n> ValueError: Shape of variable bert/embeddings/word_embeddings:0 ((33297, 768)) doesn't match with shape of tensor bert/embeddings/word_embeddings ([30522, 768]) from checkpoint reader.\r\n\r\nI haven't tried adding domain specific words to vocab but are you using the pretrained model bert_config.json file along with the domain specific trained model checkpoint. If so have you changed vocab size? And did you do it the way jacobdevlin has mentioned like writing script to initialize random weights? I didnt really understand that part, I mean what exactly are we supposed to do to initialize weights.\r\nAlso why didnt you try replacing unusedX tokens?\r\n\r\n> Q: How were the sentence embeddings calculated? By averaging the word embeddings or any other approach?\r\n\r\nI used the [CLS] token embeddings from the hidden layer output ( as they have mentioned in the paper that [CLS] token can be used for sequence level classification ). I initially tried max pooling and mean pooling of word embeddings but got really bad results ( every cosine similarity score was above 0.7 ).\r\nWhich approach did you follow?\r\n",
"To add to this, in their paper they mention they get the best results by concatenating the last four layers.\r\n\r\n```python\r\n#! In your model setup\r\n# Indices of layers to concatenate\r\nself.bert_layers = [-1, -2, -3, -4]\r\nself.bert = BertModel.from_pretrained('your-checkpoint.pth')\r\n\r\n#! In your forward method\r\nall_bert_layers, _ = self.bert_layer(bert_ids, attention_mask=bert_mask)\r\nbert_concat = torch.cat(tuple([all_bert_layers[i] for i in self.bert_layers]), dim=-1)\r\n\r\n# If you use a mask:\r\n## Pooling by also setting masked items to zero\r\nbert_mask = torch.FloatTensor(bert_mask).unsqueeze(2)\r\n## Multiply output with mask to only retain non-paddding tokens\r\nbert_pooled = torch.mul(bert_concat, bert_mask)\r\n\r\n# First item ['CLS'] is sentence representation.\r\n# Use bert_concat instead of bert_pooled if you didn't use a mask\r\nfinal_bert = bert_pooled[:, 0, :]\r\n```",
"> To add to this, in their paper they mention they get the best results by concatenating the last four layers.\r\n> \r\n> ```python\r\n> #! In your model setup\r\n> # Indices of layers to concatenate\r\n> self.bert_layers = [-1, -2, -3, -4]\r\n> self.bert = BertModel.from_pretrained('your-checkpoint.pth')\r\n> \r\n> #! In your forward method\r\n> all_bert_layers, _ = self.bert_layer(bert_ids, attention_mask=bert_mask)\r\n> bert_concat = torch.cat(tuple([all_bert_layers[i] for i in self.bert_layers]), dim=-1)\r\n> \r\n> # If you use a mask:\r\n> ## Pooling by also setting masked items to zero\r\n> bert_mask = torch.FloatTensor(bert_mask).unsqueeze(2)\r\n> ## Multiply output with mask to only retain non-paddding tokens\r\n> bert_pooled = torch.mul(bert_concat, bert_mask)\r\n> \r\n> # First item ['CLS'] is sentence representation.\r\n> # Use bert_concat instead of bert_pooled if you didn't use a mask\r\n> final_bert = bert_pooled[:, 0, :]\r\n> ```\r\n\r\n@BramVanroy Thanks for the info. I am using original BERT [repo](https://github.com/google-research/bert) pretraining code and used extract_features.py code to get last four layers output for all the tokens in a sequence. By concatenating the last four layers does it mean adding all four layers embeddings of each token?",
"@KavyaGujjala If you look at my code, you can see that I mean concatenating across the hidden dim axis. So let's say the output of a single bert layer is batch_size * seq_len * hidden_dim, then concatenating the last four ones ends up with batch_size * seq_len * (hidden_dim*4).",
"Hi @harmanpreet93 ,\r\n\r\nYou ran 50k epochs for fine tuning?\r\n\r\nThanks\r\nMahesh\r\n",
"@search4mahesh Yes, I ran 50k epochs for fine-tuning!",
"@harmanpreet93 that must be lot of compute time, example only says about 3 epochs.\r\n ",
"I'm assuming that @harmanpreet93 means 50k steps.",
"> @harmanpreet93 that must be a lot of compute time, example only says about 3 epochs.\r\n\r\nYou are right @BramVanroy I meant 50k steps. ",
"@harmanpreet93 Have you solve the problem of \"ValueError: Shape of variable bert/embeddings/word_embeddings:0 ((33297, 768)) doesn't match with shape of tensor bert/embeddings/word_embeddings ([30522, 768]) from checkpoint reader.\"?",
"Hi @Firmiana1220 \r\n\r\nI was following solutions mentioned [here](https://github.com/google-research/bert/issues/9) from bert repo. It suggests the following two approaches:\r\n\r\n> But if you want to add more vocab you can either:\r\n> (a) Just replace the \"[unusedX]\" tokens with your vocabulary. Since these were not used they are effectively randomly initialized.\r\n> (b) Append it to the end of the vocab, and write a script which generates a new checkpoint that is identical to the pre-trained checkpoint, but with a bigger vocab where the new embeddings are randomly initialized (for initialized we used tf.truncated_normal_initializer(stddev=0.02)). This will likely require mucking around with some tf.concat() and tf.assign() calls.\r\n \r\nI was going for option (b). But I couldn't find any progress. Therefore, I decided to pre-train the model just on my domain data, and not leveraging the already pre-trained models. I'm still looking for a better approach. ",
"@KavyaGujjala Thanks for the code sample. Is the masking in it redundant? Unless I'm misunderstanding, you mask out all the unused tokens, but then simply grab the 'CLS' token. The masking would be required if you use all the values, but seems redundant if you're just using the sentence representation.",
"> @KavyaGujjala Thanks for the code sample. Is the masking in it redundant? Unless I'm misunderstanding, you mask out all the unused tokens, but then simply grab the 'CLS' token. The masking would be required if you use all the values, but seems redundant if you're just using the sentence representation.\r\n\r\nHi @snard6 , I didnt use that masking code as such, for now I got a finetuned model for my domain specific data and using the cls token for representation which gave comparatively better results.\r\n\r\nI used pytorch bert codes for getting a finetuned model and then extract features code to get the cls token as sentence representation.",
"> Hi @harmanpreet93 \r\nI tried the option (a), it works. I replaced the word in vocab.txt with the word in my own domain, other words are [unusedX]. If you don't change the size of vocab.txt, it works. But if your domain words are larger than 30522, maybe you should try the option (b).\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,553 | 1,563 | 1,563 | NONE | null | Hi,
I have fine-tuned 'bert-base-uncased' using the run_lm_finetuning.py script on my domain-specific text corpus.
I got a pytorch_model.bin file after fine-tuning.
Now, how do I load that model and get embeddings?
And if I do that, will the embeddings have changed because of the fine-tuning?
Can someone please guide me on this?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/405/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/404/comments | https://api.github.com/repos/huggingface/transformers/issues/404/events | https://github.com/huggingface/transformers/pull/404 | 424,661,498 | MDExOlB1bGxSZXF1ZXN0MjYzOTM3NDM2 | 404 | Fix Language Modeling Loss | {
"login": "CatalinVoss",
"id": 332459,
"node_id": "MDQ6VXNlcjMzMjQ1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/332459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CatalinVoss",
"html_url": "https://github.com/CatalinVoss",
"followers_url": "https://api.github.com/users/CatalinVoss/followers",
"following_url": "https://api.github.com/users/CatalinVoss/following{/other_user}",
"gists_url": "https://api.github.com/users/CatalinVoss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CatalinVoss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CatalinVoss/subscriptions",
"organizations_url": "https://api.github.com/users/CatalinVoss/orgs",
"repos_url": "https://api.github.com/users/CatalinVoss/repos",
"events_url": "https://api.github.com/users/CatalinVoss/events{/privacy}",
"received_events_url": "https://api.github.com/users/CatalinVoss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I fixed the test failure above in `472857c`. Note that this occurred because I was running torch > 1.0. They fixed the contiguous view issue in https://github.com/pytorch/pytorch/issues/3653. In my view, it would probably make sense to update torch to >1.0 and remove `contiguous()` calls where possible throughout this codebase.",
"Hi @CatalinVoss,\r\nThis looks good, thanks!\r\n\r\nWe'll try to keep backward compatibility up to pytorch 0.4.1 for the moment since quite a lot of downstream libraries use this package (like AllenNLP, fair, ParlAI...). So it's good to have the `contiguous()` calls, thanks for that.\r\n\r\nDo you want to add a simple script showcasing finetuning GPT-2 on some dataset (open question, you don't need to)?",
"OK, thanks! That all makes sense. Yes, I can add a demo finetuning script, but I probably won't get to it immediately, so probably best to do it in another PR, if that's OK.",
"Ok good to merge, thanks @CatalinVoss!"
] | 1,553 | 1,563 | 1,554 | CONTRIBUTOR | null | This fixes the language modeling loss setup for GPT and GPT-2. Minimizing the loss would previously destroy the language model within a few steps. I believe that both loss computations were incorrect, since they computed the cross-entropy without shifting the logits. Given the masking setup here, we want the ith logits to act as predictors of the (i+1)st token label (not the ith).
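For illustration only (this is a toy sketch with made-up shapes and hypothetical tensor names, not the actual change in this PR), a shifted cross-entropy in PyTorch looks like this:

```python
import torch
import torch.nn.functional as F

# Toy shapes (hypothetical): batch of 1, sequence of 5 tokens, vocab of 10.
lm_logits = torch.randn(1, 5, 10)          # one logit vector per input position
lm_labels = torch.randint(0, 10, (1, 5))   # language modeling labels = the input ids

# Drop the last position's logits and the first label, so that the logits at
# position i are scored against the token at position i+1.
shift_logits = lm_logits[:, :-1, :].contiguous()
shift_labels = lm_labels[:, 1:].contiguous()
loss = F.cross_entropy(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
print(loss)
```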
When I was debugging this, to be sure that there's no other issue, I checked that the logits coming out of the OpenAI tensorflow and the pytorch implementation appeared to be the same (kind of a blackbox test). We can import the original model as `tf_model` and compare:
```
# Construct an input batch: one sequence of 1,024 tokens alternating 1 and 2
batch_size = 1
batch = [[1,2]*512]*batch_size
batch = np.array(batch)
# Attempt to seed everything
seed = 42
seed_all(seed)
np.random.seed(seed)
tf.set_random_seed(seed)
# PyTorch Implementation
model = GPT2LMHeadModel.from_pretrained('gpt2')
torch_batch = torch.tensor(batch)
loss = model(torch_batch, lm_labels=torch_batch)
print(loss.detach().numpy())
logits, presents = model(torch_batch)
print(logits.detach().numpy())
# TensorFlow implementation
with tf.Session() as sess:
    hparams = tf_model.default_hparams()
    hparams.override_from_dict({
        "n_vocab": 50257,
        "n_ctx": 1024,
        "n_embd": 768,
        "n_head": 12,
        "n_layer": 12
    })
    # Set up graph and loss as I believe is correct, similar to https://github.com/nshepperd/gpt-2
    context = tf.placeholder(tf.int32, [batch_size, None])
    output = tf_model.model(hparams=hparams, X=context)
    logits = output['logits']
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=context[:, 1:], logits=logits[:, :-1]))
    train_vars = [v for v in tf.trainable_variables() if 'model' in v.name]
    # Load checkpoint
    ckpt = tf.train.latest_checkpoint('gpt_tf/models/117M')
    saver = tf.train.Saver(var_list=train_vars)
    saver.restore(sess, ckpt)
    # Run!
    tf_logits, tf_loss = sess.run((logits, loss), feed_dict={context: batch})
    print(tf_loss)
    print(tf_logits)
# Other differences
param_optimizer = list(model.named_parameters())
print("torch:")
print([n for n, p in param_optimizer])
print("tensorflow:")
print([v.name for v in tf.trainable_variables()])
# They look like the same 147 params...
```
*This gave us for torch:*
- Loss: `11.067907`
- Logits:
```
[[[ -32.901043 -31.20237 -34.66221 ... -39.486702 -39.87312
-32.238667]
[ -55.52076 -53.428535 -56.4767 ... -68.153885 -66.77085
-58.600616]
[ -59.22766 -58.769135 -54.14502 ... -64.58172 -65.165565
-57.34685 ]
...
[-261.01627 -245.20702 -258.9687 ... -285.7149 -292.39874
-260.91663 ]
[-256.27637 -251.34431 -242.03348 ... -280.47235 -287.56
-256.33374 ]
[-261.22495 -245.68788 -258.8527 ... -286.35617 -292.90662
-261.41626 ]]]
```
*...and for TensorFlow:*
- Loss: `0.019036641 `
- Logits:
```
[[[ -32.9011 -31.202427 -34.662262 ... -39.486755 -39.87316
-32.238716]
[ -55.52075 -53.42854 -56.476707 ... -68.153885 -66.770874
-58.60062 ]
[ -59.227722 -58.76918 -54.14507 ... -64.58176 -65.165596
-57.346878]
...
[-261.01633 -245.20723 -258.96875 ... -285.71472 -292.3988
-260.91663 ]
[-256.27637 -251.3442 -242.03352 ... -280.4723 -287.56
-256.33374 ]
[-261.22498 -245.68774 -258.85263 ... -286.3562 -292.90668
-261.41626 ]]]
```
Shifting the logits so that tokens < n predict the nth token like in the TF example above should fix this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/404/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/404",
"html_url": "https://github.com/huggingface/transformers/pull/404",
"diff_url": "https://github.com/huggingface/transformers/pull/404.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/404.patch",
"merged_at": 1554284131000
} |
https://api.github.com/repos/huggingface/transformers/issues/403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/403/comments | https://api.github.com/repos/huggingface/transformers/issues/403/events | https://github.com/huggingface/transformers/issues/403 | 424,630,249 | MDU6SXNzdWU0MjQ2MzAyNDk= | 403 | [Question]Embedding Generate Problem | {
"login": "SupUnicorn",
"id": 29460408,
"node_id": "MDQ6VXNlcjI5NDYwNDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/29460408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SupUnicorn",
"html_url": "https://github.com/SupUnicorn",
"followers_url": "https://api.github.com/users/SupUnicorn/followers",
"following_url": "https://api.github.com/users/SupUnicorn/following{/other_user}",
"gists_url": "https://api.github.com/users/SupUnicorn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SupUnicorn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SupUnicorn/subscriptions",
"organizations_url": "https://api.github.com/users/SupUnicorn/orgs",
"repos_url": "https://api.github.com/users/SupUnicorn/repos",
"events_url": "https://api.github.com/users/SupUnicorn/events{/privacy}",
"received_events_url": "https://api.github.com/users/SupUnicorn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi,\r\n\r\nHow did you get the english embeddings using the bert-base-uncase model.\r\n\r\nI have fine tuned the model on domain specific corpus. I am trying to get embeddings now using the pytorch_model.bin I got after fine tuning.\r\n\r\nAny idea on how to do this?",
"Are your models in evaluation mode (`model.eval()`)?\r\nUsually reproductibility issues comes from forgetting to disable the DropOut modules using `model.eval()`.",
"> Are your models in evaluation mode (`model.eval()`)?\r\n> Usually reproductibility issues comes from forgetting to disable the DropOut modules using `model.eval()`.\r\n\r\nOh,it's really helpful.My rediculous mistake.Thank you for your answer!!!",
"@SupUnicorn Following the above comment, could you please let me know , during the evaluation while reproducing the results using a finetuned model, how to disable the DropOut modules ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,553 | 1,564 | 1,564 | NONE | null | When I use the 'bert-base-uncased' model to get English embeddings, I get an accurate and unchanging tensor. But when I do the same thing with 'bert-base-chinese' and input the same sequence every time, I get different tensors. My input sequence looks like "[CLS] + sequence + [SEP]". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/403/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/402/comments | https://api.github.com/repos/huggingface/transformers/issues/402/events | https://github.com/huggingface/transformers/issues/402 | 424,585,953 | MDU6SXNzdWU0MjQ1ODU5NTM= | 402 | gpt2 tokenizer issue with ValueError: chr() arg not in range(256) in Python 2.X | {
"login": "KaiQiangSong",
"id": 9112038,
"node_id": "MDQ6VXNlcjkxMTIwMzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9112038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaiQiangSong",
"html_url": "https://github.com/KaiQiangSong",
"followers_url": "https://api.github.com/users/KaiQiangSong/followers",
"following_url": "https://api.github.com/users/KaiQiangSong/following{/other_user}",
"gists_url": "https://api.github.com/users/KaiQiangSong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaiQiangSong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaiQiangSong/subscriptions",
"organizations_url": "https://api.github.com/users/KaiQiangSong/orgs",
"repos_url": "https://api.github.com/users/KaiQiangSong/repos",
"events_url": "https://api.github.com/users/KaiQiangSong/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaiQiangSong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Indeed, I have added backward compatibility to python 2 for GPT-2.\r\nDo you want to submit a PR on this?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,553 | 1,559 | 1,559 | NONE | null | See [the code](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization_gpt2.py#L50)
Here's a solution for Python 2.x:
```python
@lru_cache()
def bytes_to_unicode():
    """
    Returns list of utf-8 byte and a corresponding list of unicode strings.
    The reversible bpe codes work on unicode strings.
    This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
    When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
    This is a significant percentage of your normal, say, 32K bpe vocab.
    To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
    And avoids mapping to whitespace/control characters the bpe code barfs on.
    """
    bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
    cs = bs[:]
    n = 0
    for b in range(2**8):
        if b not in bs:
            bs.append(b)
            cs.append(2**8+n)
            n += 1
    cs = [unichr(n) for n in cs]
    return dict(zip(bs, cs))
```
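As a quick sanity check of the mapping (an illustrative snippet, assuming the `bytes_to_unicode` above has been defined; Python 2):

```python
# Byte 32 (space) is outside the printable ranges kept in `bs`, so it is
# remapped above 255 and lands on U+0120 ('Ġ'), the familiar GPT-2
# word-boundary marker.
byte_to_char = bytes_to_unicode()
print(repr(byte_to_char[ord(' ')]))  # u'\u0120'
```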
And change the `encode` method to:
```python
def encode(self, text):
    bpe_tokens = []
    for token in re.findall(self.pat, text):
        token = ''.join(self.byte_encoder[ord(b)] for b in token.encode('utf-8'))
        bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
    if len(bpe_tokens) > self.max_len:
        raise ValueError(
            "Token indices sequence length is longer than the specified maximum "
            " sequence length for this OpenAI GPT-2 model ({} > {}). Running this"
            " sequence through the model will result in indexing errors".format(len(bpe_tokens), self.max_len)
        )
    return bpe_tokens
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/402/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/401/comments | https://api.github.com/repos/huggingface/transformers/issues/401/events | https://github.com/huggingface/transformers/issues/401 | 424,517,333 | MDU6SXNzdWU0MjQ1MTczMzM= | 401 | How can I generate new text after having fine-tuned BERT on a custom dataset ? | {
"login": "MoMe36",
"id": 18698421,
"node_id": "MDQ6VXNlcjE4Njk4NDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/18698421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MoMe36",
"html_url": "https://github.com/MoMe36",
"followers_url": "https://api.github.com/users/MoMe36/followers",
"following_url": "https://api.github.com/users/MoMe36/following{/other_user}",
"gists_url": "https://api.github.com/users/MoMe36/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MoMe36/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MoMe36/subscriptions",
"organizations_url": "https://api.github.com/users/MoMe36/orgs",
"repos_url": "https://api.github.com/users/MoMe36/repos",
"events_url": "https://api.github.com/users/MoMe36/events{/privacy}",
"received_events_url": "https://api.github.com/users/MoMe36/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Also interested in this! ",
"Hi,\r\nIt's quite difficult to use BERT to generate text as BERT is not a causal language model per se.\r\nHere is an example: https://github.com/nyu-dl/bert-gen by @W4ngatang and @kyunghyuncho.",
"Bert was not trained for text generation since it's not trained in the classical lm setting. However there are some new approaches that doesn't rely on next word predictions in the classical lm way. Have a look at: [Insertion Transformer](https://arxiv.org/abs/1902.03249) and [Insertion-based Decoding](https://arxiv.org/abs/1902.01370).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,553 | 1,560 | 1,560 | NONE | null | Hey,
Once I've fine-tuned the language model, how can I get it to generate new text? Is there any example available?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/401/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/401/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/400/comments | https://api.github.com/repos/huggingface/transformers/issues/400/events | https://github.com/huggingface/transformers/issues/400 | 424,503,196 | MDU6SXNzdWU0MjQ1MDMxOTY= | 400 | how to freeze bert model and just train a classifier? | {
"login": "omerarshad",
"id": 16164105,
"node_id": "MDQ6VXNlcjE2MTY0MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/16164105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omerarshad",
"html_url": "https://github.com/omerarshad",
"followers_url": "https://api.github.com/users/omerarshad/followers",
"following_url": "https://api.github.com/users/omerarshad/following{/other_user}",
"gists_url": "https://api.github.com/users/omerarshad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omerarshad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omerarshad/subscriptions",
"organizations_url": "https://api.github.com/users/omerarshad/orgs",
"repos_url": "https://api.github.com/users/omerarshad/repos",
"events_url": "https://api.github.com/users/omerarshad/events{/privacy}",
"received_events_url": "https://api.github.com/users/omerarshad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi the BERT models are regular PyTorch models, you can just use the usual way we freeze layers in PyTorch. For example you can have a look at the [Transfer Learning tutorial of PyTorch](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html#convnet-as-fixed-feature-extractor).\r\n\r\nIn our case freezing the pretrained part of a `BertForSequenceClassification` model would look like this\r\n```python\r\nmodel = BertForSequenceClassification.from_pretrained('bert-base-uncased')\r\n\r\nfor param in model.bert.parameters():\r\n param.requires_grad = False\r\n```\r\nThen only the classification layer should be have `requires_grad=True`.",
"Wouldn't this solution freeze all layers, including the classifier layer?",
"@nuno-carneiro @thomwolf \r\nI think, this will freeze all the layers including the classifier layer. (Correct me, if I'm wrong) \r\n```\r\nmodel = BertForSequenceClassification.from_pretrained('bert-base-uncased')\r\n\r\nfor param in model.bert.parameters():\r\n param.requires_grad = False\r\n```\r\n\r\n**I think the below code will freeze only the BERT layers (Correct me, if I'm wrong)**\r\n\r\n```\r\nmodel = BertForSequenceClassification.from_pretrained('bert-base-uncased')\r\n\r\nfor param in model.bert.bert.parameters():\r\n param.requires_grad = False\r\n```",
"You are right @kalyanks0611 ",
"AttributeError: 'BertModel' object has no attribute 'bert'\r\n\r\nI got the above error when doing \r\n\r\nfor param in model.bert.bert.parameters():\r\n param.requires_grad = False\r\n\r\nAny clue? Thanks :)",
"@yhl48 Hi, please open a new issue with the problem you're facing. Thank you!",
"does anyone know how to do this for the TF 2.0 version too?",
"> AttributeError: 'BertModel' object has no attribute 'bert'\r\n> \r\n> I got the above error when doing\r\n> \r\n> for param in model.bert.bert.parameters():\r\n> param.requires_grad = False\r\n> \r\n> Any clue? Thanks :)\r\n\r\nI got the same error. This is my workaround following the one in #1431 :\r\n\r\n\tfor name, param in model.named_parameters():\r\n\t\tif 'classifier' not in name: # classifier layer\r\n\t\t\tparam.requires_grad = False",
"You're probably instantiating your model as a `BertModel` whereas @kalyanks0611's example uses `BertForSequenceClassification`.",
"> does anyone know how to do this for the TF 2.0 version too?\r\n\r\nThis is the real question",
"> does anyone know how to do this for the TF 2.0 version too?\r\n\r\nActually in tf2 framework, pre-trained is saved in `model.bert.weights`, which is a list. So you need to do:\r\n```\r\nfor w in model.bert.weights():\r\n w._trainable= False\r\n```\r\n",
"> > does anyone know how to do this for the TF 2.0 version too?\r\n> \r\n> This is the real question\r\n\r\nsee [my post](https://github.com/huggingface/transformers/issues/400#issuecomment-571354368)",
"> does anyone know how to do this for the TF 2.0 version too?\r\n\r\nmodel.bert.trainable = False\r\nIs this ok?\r\nBut I found that the training speed has not accelerated",
"> > does anyone know how to do this for the TF 2.0 version too?\r\n> \r\n> Actually in tf2 framework, pre-trained is saved in `model.bert.weights`, which is a list. So you need to do:\r\n> \r\n> ```\r\n> for w in model.bert.weights():\r\n> w._trainable= False\r\n> ```\r\n\r\nThanks! I needed a very slight amendment for it to work.\r\n\r\n```\r\nfor w in model.get_layer('tf_distil_bert_model').weights:\r\n w._trainable = False\r\n```\r\n\r\nWorks for me. And training time is halved from 9 hours to 4 hours.",
"I am trying to let bert model layer 10 and layer 11 are trainable, like below:\r\n```shell\r\nbert_model = TFBertModel.from_pretrained(\"bert-base-uncased\")\r\nfor w in bert_model.bert.weights:\r\n if w.name.find('layer_._10') == -1 and w.name.find('layer_._11') == -1:\r\n print(w.name)\r\n w._trainable = False\r\n```\r\nBut the result is as below:\r\n```shell\r\nTotal params: 109,483,776\r\nTrainable params: 109,483,776\r\nNon-trainable params: 0\r\n```\r\nHow can i change the specific layers to be trainable?\r\n\r\nThanks",
"> \r\n> \r\n> I am trying to let bert model layer 10 and layer 11 are trainable, like below:\r\n> \r\n> ```shell\r\n> bert_model = TFBertModel.from_pretrained(\"bert-base-uncased\")\r\n> for w in bert_model.bert.weights:\r\n> if w.name.find('layer_._10') == -1 and w.name.find('layer_._11') == -1:\r\n> print(w.name)\r\n> w._trainable = False\r\n> ```\r\n> \r\n> But the result is as below:\r\n> \r\n> ```shell\r\n> Total params: 109,483,776\r\n> Trainable params: 109,483,776\r\n> Non-trainable params: 0\r\n> ```\r\n> \r\n> How can i change the specific layers to be trainable?\r\n> \r\n> Thanks\r\n\r\nThe codes below would successfully freezes layers in a model:\r\n```\r\nfrom transformers import TFXLNetModel as transformerModel\r\ntransformer_model_name = 'xlnet-base-cased'\r\ntransformer_model = transformerModel.from_pretrained(transformer_model_name)\r\nfor layer in transformer_model.layers:\r\n layer.trainable = False\r\n```",
"> @nuno-carneiro @thomwolf\r\n> I think, this will freeze all the layers including the classifier layer. (Correct me, if I'm wrong)\r\n> \r\n> ```\r\n> model = BertForSequenceClassification.from_pretrained('bert-base-uncased')\r\n> \r\n> for param in model.bert.parameters():\r\n> param.requires_grad = False\r\n> ```\r\n> \r\n> **I think the below code will freeze only the BERT layers (Correct me, if I'm wrong)**\r\n> \r\n> ```\r\n> model = BertForSequenceClassification.from_pretrained('bert-base-uncased')\r\n> \r\n> for param in model.bert.bert.parameters():\r\n> param.requires_grad = False\r\n> ```\r\n\r\nHi, I have \r\n\r\n> @nuno-carneiro @thomwolf\r\n> I think, this will freeze all the layers including the classifier layer. (Correct me, if I'm wrong)\r\n> \r\n> ```\r\n> model = BertForSequenceClassification.from_pretrained('bert-base-uncased')\r\n> \r\n> for param in model.bert.parameters():\r\n> param.requires_grad = False\r\n> ```\r\n> \r\n> **I think the below code will freeze only the BERT layers (Correct me, if I'm wrong)**\r\n> \r\n> ```\r\n> model = BertForSequenceClassification.from_pretrained('bert-base-uncased')\r\n> \r\n> for param in model.bert.bert.parameters():\r\n> param.requires_grad = False\r\n> ```\r\n\r\nI think there are 2 elements in BertForSequenceClassification. One is bert and another one is classifier?or? So I think we dont need model.bert.bert.parameter(). We just need model.bert.parameters()? or?",
"`model = BertForSequenceClassification.from_pretrained('bert-base-uncased')`\r\n\r\nWhen you run `model` you will get the following architecture (I skipped many intermediate modules)\r\n```\r\nBertForSequenceClassification(\r\n (bert): BertModel(\r\n (embeddings): BertEmbeddings(\r\n ....\r\n (encoder): BertEncoder(\r\n (layer): ModuleList(\r\n (0): BertLayer(\r\n ...\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n ...\r\n (pooler): BertPooler(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (activation): Tanh()\r\n )\r\n )\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n (classifier): Linear(in_features=768, out_features=2, bias=True)\r\n)\r\n```\r\n\r\nSo basically `model` has 3 main submodules `bert`, `dropout`, and `classifier` (you can see this from the indentation as well.). Try running `model.bert`, `model.classifier`. When you call `model.bert `and freeze all the params, it will freeze entire encoder blocks(12 of them).\r\n\r\n\r\nTherefore, the following code\r\n```\r\nfor param in model.bert.bert.parameters():\r\n param.requires_grad = False\r\n```\r\nshould be\r\n\r\n```\r\nfor param in model.bert.parameters():\r\n param.requires_grad = False\r\n```",
"> `model = BertForSequenceClassification.from_pretrained('bert-base-uncased')`\r\n> \r\n> When you run `model` you will get the following architecture (I skipped many intermediate modules)\r\n> \r\n> ```\r\n> BertForSequenceClassification(\r\n> (bert): BertModel(\r\n> (embeddings): BertEmbeddings(\r\n> ....\r\n> (encoder): BertEncoder(\r\n> (layer): ModuleList(\r\n> (0): BertLayer(\r\n> ...\r\n> (output): BertSelfOutput(\r\n> (dense): Linear(in_features=768, out_features=768, bias=True)\r\n> (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n> (dropout): Dropout(p=0.1, inplace=False)\r\n> )\r\n> )\r\n> ...\r\n> (pooler): BertPooler(\r\n> (dense): Linear(in_features=768, out_features=768, bias=True)\r\n> (activation): Tanh()\r\n> )\r\n> )\r\n> (dropout): Dropout(p=0.1, inplace=False)\r\n> (classifier): Linear(in_features=768, out_features=2, bias=True)\r\n> )\r\n> ```\r\n> \r\n> So basically `model` has 3 main submodules `bert`, `dropout`, and `classifier` (you can see this from the indentation as well.). Try running `model.bert`, `model.classifier`. When you call `model.bert `and freeze all the params, it will freeze entire encoder blocks(12 of them).\r\n> \r\n> Therefore, the following code\r\n> \r\n> ```\r\n> for param in model.bert.bert.parameters():\r\n> param.requires_grad = False\r\n> ```\r\n> \r\n> should be\r\n> \r\n> ```\r\n> for param in model.bert.parameters():\r\n> param.requires_grad = False\r\n> ```\r\n\r\nI think you are right. And I think there are no model.bert.bert? Or?",
"> > `model = BertForSequenceClassification.from_pretrained('bert-base-uncased')`\r\n> > When you run `model` you will get the following architecture (I skipped many intermediate modules)\r\n> > ```\r\n> > BertForSequenceClassification(\r\n> > (bert): BertModel(\r\n> > (embeddings): BertEmbeddings(\r\n> > ....\r\n> > (encoder): BertEncoder(\r\n> > (layer): ModuleList(\r\n> > (0): BertLayer(\r\n> > ...\r\n> > (output): BertSelfOutput(\r\n> > (dense): Linear(in_features=768, out_features=768, bias=True)\r\n> > (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n> > (dropout): Dropout(p=0.1, inplace=False)\r\n> > )\r\n> > )\r\n> > ...\r\n> > (pooler): BertPooler(\r\n> > (dense): Linear(in_features=768, out_features=768, bias=True)\r\n> > (activation): Tanh()\r\n> > )\r\n> > )\r\n> > (dropout): Dropout(p=0.1, inplace=False)\r\n> > (classifier): Linear(in_features=768, out_features=2, bias=True)\r\n> > )\r\n> > ```\r\n> > \r\n> > \r\n> > So basically `model` has 3 main submodules `bert`, `dropout`, and `classifier` (you can see this from the indentation as well.). Try running `model.bert`, `model.classifier`. When you call `model.bert `and freeze all the params, it will freeze entire encoder blocks(12 of them).\r\n> > Therefore, the following code\r\n> > ```\r\n> > for param in model.bert.bert.parameters():\r\n> > param.requires_grad = False\r\n> > ```\r\n> > \r\n> > \r\n> > should be\r\n> > ```\r\n> > for param in model.bert.parameters():\r\n> > param.requires_grad = False\r\n> > ```\r\n> \r\n> I think you are right. And I think there are no model.bert.bert? Or?\r\n\r\nI think so, if you want to freeze bert part in bert model, u just say \r\n for param in model.bert.parameters():\r\n param.requires_grad = False\r\nbut how are about optizmer ? \r\n\r\nis look like in this way ? \r\n optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001)",
"> > > `model = BertForSequenceClassification.from_pretrained('bert-base-uncased')`\r\n> > > When you run `model` you will get the following architecture (I skipped many intermediate modules)\r\n> > > ```\r\n> > > BertForSequenceClassification(\r\n> > > (bert): BertModel(\r\n> > > (embeddings): BertEmbeddings(\r\n> > > ....\r\n> > > (encoder): BertEncoder(\r\n> > > (layer): ModuleList(\r\n> > > (0): BertLayer(\r\n> > > ...\r\n> > > (output): BertSelfOutput(\r\n> > > (dense): Linear(in_features=768, out_features=768, bias=True)\r\n> > > (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n> > > (dropout): Dropout(p=0.1, inplace=False)\r\n> > > )\r\n> > > )\r\n> > > ...\r\n> > > (pooler): BertPooler(\r\n> > > (dense): Linear(in_features=768, out_features=768, bias=True)\r\n> > > (activation): Tanh()\r\n> > > )\r\n> > > )\r\n> > > (dropout): Dropout(p=0.1, inplace=False)\r\n> > > (classifier): Linear(in_features=768, out_features=2, bias=True)\r\n> > > )\r\n> > > ```\r\n> > > \r\n> > > \r\n> > > So basically `model` has 3 main submodules `bert`, `dropout`, and `classifier` (you can see this from the indentation as well.). Try running `model.bert`, `model.classifier`. When you call `model.bert `and freeze all the params, it will freeze entire encoder blocks(12 of them).\r\n> > > Therefore, the following code\r\n> > > ```\r\n> > > for param in model.bert.bert.parameters():\r\n> > > param.requires_grad = False\r\n> > > ```\r\n> > > \r\n> > > \r\n> > > should be\r\n> > > ```\r\n> > > for param in model.bert.parameters():\r\n> > > param.requires_grad = False\r\n> > > ```\r\n> > \r\n> > \r\n> > I think you are right. And I think there are no model.bert.bert? Or?\r\n> \r\n> I think so, if you want to freeze bert part in bert model, u just say\r\n> for param in model.bert.parameters():\r\n> param.requires_grad = False\r\n> but how are about optizmer ?\r\n> \r\n> is look like in this way ?\r\n> optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001)\r\n\r\nI think you have written right code. But we should write usually 2 parts together. I mean:\r\nfor param in model.bert.parameters():\r\nparam.requires_grad = False\r\noptimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001)\r\nCuz we need to know which parameters are frozen. ",
"> > > > `model = BertForSequenceClassification.from_pretrained('bert-base-uncased')`\r\n> > > > When you run `model` you will get the following architecture (I skipped many intermediate modules)\r\n> > > > ```\r\n> > > > BertForSequenceClassification(\r\n> > > > (bert): BertModel(\r\n> > > > (embeddings): BertEmbeddings(\r\n> > > > ....\r\n> > > > (encoder): BertEncoder(\r\n> > > > (layer): ModuleList(\r\n> > > > (0): BertLayer(\r\n> > > > ...\r\n> > > > (output): BertSelfOutput(\r\n> > > > (dense): Linear(in_features=768, out_features=768, bias=True)\r\n> > > > (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n> > > > (dropout): Dropout(p=0.1, inplace=False)\r\n> > > > )\r\n> > > > )\r\n> > > > ...\r\n> > > > (pooler): BertPooler(\r\n> > > > (dense): Linear(in_features=768, out_features=768, bias=True)\r\n> > > > (activation): Tanh()\r\n> > > > )\r\n> > > > )\r\n> > > > (dropout): Dropout(p=0.1, inplace=False)\r\n> > > > (classifier): Linear(in_features=768, out_features=2, bias=True)\r\n> > > > )\r\n> > > > ```\r\n> > > > \r\n> > > > \r\n> > > > So basically `model` has 3 main submodules `bert`, `dropout`, and `classifier` (you can see this from the indentation as well.). Try running `model.bert`, `model.classifier`. When you call `model.bert `and freeze all the params, it will freeze entire encoder blocks(12 of them).\r\n> > > > Therefore, the following code\r\n> > > > ```\r\n> > > > for param in model.bert.bert.parameters():\r\n> > > > param.requires_grad = False\r\n> > > > ```\r\n> > > > \r\n> > > > \r\n> > > > should be\r\n> > > > ```\r\n> > > > for param in model.bert.parameters():\r\n> > > > param.requires_grad = False\r\n> > > > ```\r\n> > > \r\n> > > \r\n> > > I think you are right. And I think there are no model.bert.bert? Or?\r\n> > \r\n> > \r\n> > I think so, if you want to freeze bert part in bert model, u just say\r\n> > for param in model.bert.parameters():\r\n> > param.requires_grad = False\r\n> > but how are about optizmer ?\r\n> > is look like in this way ?\r\n> > optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001)\r\n> \r\n> I think you have written right code. But we should write usually 2 parts together. I mean:\r\n> for param in model.bert.parameters():\r\n> param.requires_grad = False\r\n> optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001)\r\n> Cuz we need to know which parameters are frozen.\r\n\r\nsorry, I do not know what do you mean?\r\nso let say, if we just want to freeze the input part ( input embedding). how can I write the code? \r\nbert = BertModel.from_pretrained('bert-base-uncased')\r\n\r\nfor name, param in bert.named_parameters(): \r\n if name.startswith('embeddings'):\r\n param.requires_grad = False\r\nthis above three line, is tell model do not train the embedding right?\r\n\r\nbut how to tell the optimizer to do not change the embedding?\r\n\r\n",
"> > > > > `model = BertForSequenceClassification.from_pretrained('bert-base-uncased')`\r\n> > > > > When you run `model` you will get the following architecture (I skipped many intermediate modules)\r\n> > > > > ```\r\n> > > > > BertForSequenceClassification(\r\n> > > > > (bert): BertModel(\r\n> > > > > (embeddings): BertEmbeddings(\r\n> > > > > ....\r\n> > > > > (encoder): BertEncoder(\r\n> > > > > (layer): ModuleList(\r\n> > > > > (0): BertLayer(\r\n> > > > > ...\r\n> > > > > (output): BertSelfOutput(\r\n> > > > > (dense): Linear(in_features=768, out_features=768, bias=True)\r\n> > > > > (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n> > > > > (dropout): Dropout(p=0.1, inplace=False)\r\n> > > > > )\r\n> > > > > )\r\n> > > > > ...\r\n> > > > > (pooler): BertPooler(\r\n> > > > > (dense): Linear(in_features=768, out_features=768, bias=True)\r\n> > > > > (activation): Tanh()\r\n> > > > > )\r\n> > > > > )\r\n> > > > > (dropout): Dropout(p=0.1, inplace=False)\r\n> > > > > (classifier): Linear(in_features=768, out_features=2, bias=True)\r\n> > > > > )\r\n> > > > > ```\r\n> > > > > \r\n> > > > > \r\n> > > > > So basically `model` has 3 main submodules `bert`, `dropout`, and `classifier` (you can see this from the indentation as well.). Try running `model.bert`, `model.classifier`. When you call `model.bert `and freeze all the params, it will freeze entire encoder blocks(12 of them).\r\n> > > > > Therefore, the following code\r\n> > > > > ```\r\n> > > > > for param in model.bert.bert.parameters():\r\n> > > > > param.requires_grad = False\r\n> > > > > ```\r\n> > > > > \r\n> > > > > \r\n> > > > > should be\r\n> > > > > ```\r\n> > > > > for param in model.bert.parameters():\r\n> > > > > param.requires_grad = False\r\n> > > > > ```\r\n> > > > \r\n> > > > \r\n> > > > I think you are right. And I think there are no model.bert.bert? Or?\r\n> > > \r\n> > > \r\n> > > I think so, if you want to freeze bert part in bert model, u just say\r\n> > > for param in model.bert.parameters():\r\n> > > param.requires_grad = False\r\n> > > but how are about optizmer ?\r\n> > > is look like in this way ?\r\n> > > optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001)\r\n> > \r\n> > \r\n> > I think you have written right code. But we should write usually 2 parts together. I mean:\r\n> > for param in model.bert.parameters():\r\n> > param.requires_grad = False\r\n> > optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001)\r\n> > Cuz we need to know which parameters are frozen.\r\n> \r\n> sorry, I do not know what do you mean?\r\n> so let say, if we just want to freeze the input part ( input embedding). how can I write the code?\r\n> bert = BertModel.from_pretrained('bert-base-uncased')\r\n> \r\n> for name, param in bert.named_parameters():\r\n> if name.startswith('embeddings'):\r\n> param.requires_grad = False\r\n> this above three line, is tell model do not train the embedding right?\r\n> \r\n> but how to tell the optimizer to do not change the embedding?\r\n\r\nthese three lines have set False in 'Embedding' right? Then we need just to write what you have written:\r\noptimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00001)",
"Are any of the layers in a pre-trained BERT model originally frozen? If not are there any supporting articles or research papers that I can look to? ",
"> Are any of the layers in a pre-trained BERT model originally frozen? If not are there any supporting articles or research papers that I can look to?\r\n\r\nI dont think there are any of layers originally frozen. You can read some Papers about Bert firstly, then in Pytoch forum or Tensorflow you could learn how to freeze the layers.",
"In tensorflow, other than the classifier layers, to freeze some layers in a underlying base transformer model something like this maybe needed(corrections are welcome):\r\n\r\n```\r\nbertmodel = TFAutoModelForSequenceClassification.from_pretrained(bertModelName, num_labels=output_num_classes, from_pt=True)\r\n\r\nfor i,v in enumerate(bertmodel.layers[0].variables):\r\n if i>=19:\r\n bertmodel.layers[0].variables[i] = tf.stop_gradient(v)\r\n print(\"do freeze {} {}\".format(i, v.name))\r\n else:\r\n print(\"not freeze {} {}\".format(i, v.name))\r\n```",
"If I want to freeze the entire model, I can use model.trainable = False and vice vs, however, if I want to freeze part of it, I am setting that up in the loop. It's still not updating the trainable flag. I am using TensorFlow. Any thoughts? \r\n\r\n```\r\n# Adjust the trainable layer weights based on retrain_layer_count\r\n # If retrain_layer_count is 0, then base model is frozen.\r\n # If retrain_layer_count is 12, then the entire base model is trainable.\r\n # And that implies that all the pretrained weights are lost and it relearns\r\n # from the input data.\r\n # If retrain_layer_count is between 1 and 11, then the last n layers of\r\n # the pretrained model retrained.\r\n if retrain_layer_count == 0:\r\n # The pretained model is frozen\r\n model.trainable = False \r\n\r\n elif retrain_layer_count == 12: \r\n # The pretrained model is retrained thru all layers. \r\n model.trainable = True \r\n\r\n else: \r\n # Restrict training to the num_train_layers outer transformer layers\r\n retrain_layer_list = []\r\n #model.trainable = False \r\n for retrain_layer_number in range(retrain_layer_count):\r\n\r\n layer_code = '_' + str(11 - retrain_layer_number)\r\n retrain_layer_list.append(layer_code)\r\n \r\n print('Retrain layers: \\n', retrain_layer_list)\r\n print(\"After adjusting...\")\r\n for layer in model.weights:\r\n layer._trainable = False\r\n print(\"***\", layer.name, layer._trainable)\r\n if 'layer_' in layer.name and layer.name.split(\".\")[1].split(\"/\")[0] in retrain_layer_list:\r\n layer._trainable = True\r\n print(\"$$$\", layer.name, layer._trainable)\r\n elif 'layer_' not in layer.name :\r\n layer._trainable = True\r\n print(\"###\", layer.name, layer._trainable)\r\n \r\n #for weight_details in model.weights:\r\n # print(weight_details.name, weight_details._trainable)\r\n print(f\"Number of trainable parameters : {count_params(model.trainable_weights)}\")\r\n print(f\"Number of non-trainable parameters : {count_params(model.non_trainable_variables)}\")\r\n```\r\n \r\n ",
"> If I want to freeze the entire model, I can use model.trainable = False and vice vs, however, if I want to freeze part of it, I am setting that up in the loop. It's still not updating the trainable flag. I am using TensorFlow. Any thoughts?\r\n> \r\n> ```\r\n> # Adjust the trainable layer weights based on retrain_layer_count\r\n> # If retrain_layer_count is 0, then base model is frozen.\r\n> # If retrain_layer_count is 12, then the entire base model is trainable.\r\n> # And that implies that all the pretrained weights are lost and it relearns\r\n> # from the input data.\r\n> # If retrain_layer_count is between 1 and 11, then the last n layers of\r\n> # the pretrained model retrained.\r\n> if retrain_layer_count == 0:\r\n> # The pretained model is frozen\r\n> model.trainable = False \r\n> \r\n> elif retrain_layer_count == 12: \r\n> # The pretrained model is retrained thru all layers. \r\n> model.trainable = True \r\n> \r\n> else: \r\n> # Restrict training to the num_train_layers outer transformer layers\r\n> retrain_layer_list = []\r\n> #model.trainable = False \r\n> for retrain_layer_number in range(retrain_layer_count):\r\n> \r\n> layer_code = '_' + str(11 - retrain_layer_number)\r\n> retrain_layer_list.append(layer_code)\r\n> \r\n> print('Retrain layers: \\n', retrain_layer_list)\r\n> print(\"After adjusting...\")\r\n> for layer in model.weights:\r\n> layer._trainable = False\r\n> print(\"***\", layer.name, layer._trainable)\r\n> if 'layer_' in layer.name and layer.name.split(\".\")[1].split(\"/\")[0] in retrain_layer_list:\r\n> layer._trainable = True\r\n> print(\"$$$\", layer.name, layer._trainable)\r\n> elif 'layer_' not in layer.name :\r\n> layer._trainable = True\r\n> print(\"###\", layer.name, layer._trainable)\r\n> \r\n> #for weight_details in model.weights:\r\n> # print(weight_details.name, weight_details._trainable)\r\n> print(f\"Number of trainable parameters : {count_params(model.trainable_weights)}\")\r\n> print(f\"Number of non-trainable parameters : {count_params(model.non_trainable_variables)}\")\r\n> ```\r\n\r\nI ran into this issue too, where if I first freeze all layers, and then selectively make some trainable, it does not work. Do it the other way round: set everything trainable and then freeze the layers you want!"
] | 1,553 | 1,685 | 1,553 | NONE | null | How to freeze all layers of BERT and train only the task-specific classifier? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/400/reactions",
"total_count": 49,
"+1": 46,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/400/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/399/comments | https://api.github.com/repos/huggingface/transformers/issues/399/events | https://github.com/huggingface/transformers/issues/399 | 424,392,950 | MDU6SXNzdWU0MjQzOTI5NTA= | 399 | Is the GPT-2 pretrained model language agnostic? | {
"login": "AlyShmahell",
"id": 6887917,
"node_id": "MDQ6VXNlcjY4ODc5MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6887917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlyShmahell",
"html_url": "https://github.com/AlyShmahell",
"followers_url": "https://api.github.com/users/AlyShmahell/followers",
"following_url": "https://api.github.com/users/AlyShmahell/following{/other_user}",
"gists_url": "https://api.github.com/users/AlyShmahell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlyShmahell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlyShmahell/subscriptions",
"organizations_url": "https://api.github.com/users/AlyShmahell/orgs",
"repos_url": "https://api.github.com/users/AlyShmahell/repos",
"events_url": "https://api.github.com/users/AlyShmahell/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlyShmahell/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Aly, GPT-2 is pretrained on an English only corpus.",
"Hi @thomwolf , thank you for the clarification."
] | 1,553 | 1,553 | 1,553 | NONE | null | I'm trying to build a language model that trains on a Polish corpus. And I'm wondering if the GPT-2 pretrained model you present supports that, or if it's English only.
Thank You. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/399/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/398/comments | https://api.github.com/repos/huggingface/transformers/issues/398/events | https://github.com/huggingface/transformers/pull/398 | 424,000,240 | MDExOlB1bGxSZXF1ZXN0MjYzNDUwODM5 | 398 | Multi GPU | {
"login": "dirkgr",
"id": 920638,
"node_id": "MDQ6VXNlcjkyMDYzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/920638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dirkgr",
"html_url": "https://github.com/dirkgr",
"followers_url": "https://api.github.com/users/dirkgr/followers",
"following_url": "https://api.github.com/users/dirkgr/following{/other_user}",
"gists_url": "https://api.github.com/users/dirkgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dirkgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dirkgr/subscriptions",
"organizations_url": "https://api.github.com/users/dirkgr/orgs",
"repos_url": "https://api.github.com/users/dirkgr/repos",
"events_url": "https://api.github.com/users/dirkgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dirkgr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @dirkgr, thanks for this PR.\r\n\r\nI think we will keep all the GPU/multi-GPU logic outside of the main library for now. It makes it easier to integrate the module in downstream libraries and integrating such modifications at the current stage would cause too many breaking changes for the users unfortunately.\r\n\r\nSo I'm closing this PR for now.",
"Oh, this was only meant for internal consumption.\r\n\r\nThat said, I thought a bit about how to make it general enough to properly integrate it. I thought the setting for multi-GPU could be part of the config, so it wouldn't affect normal operation. How would you want to do it?\r\n\r\nI didn't get that much interest in this patch internally though, so this is not the most important project right now."
] | 1,553 | 1,553 | 1,553 | CONTRIBUTOR | null | This is an incomplete proof-of-concept of how to run BERT across multiple GPUs. It will take advantage of multiple GPUs' memory, but not of their compute cores.
Do not merge | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/398/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/398",
"html_url": "https://github.com/huggingface/transformers/pull/398",
"diff_url": "https://github.com/huggingface/transformers/pull/398.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/398.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/397/comments | https://api.github.com/repos/huggingface/transformers/issues/397/events | https://github.com/huggingface/transformers/issues/397 | 423,773,597 | MDU6SXNzdWU0MjM3NzM1OTc= | 397 | Allow do_lower_case regardless of do_basic_tokenize | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@thomwolf Please re-open. If I have the time, I can try to work on this.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Bump so I don't forget about this.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,553 | 1,592 | 1,592 | COLLABORATOR | null | It would make sense to me that you could set `do_lower_case=True` even when `do_basic_tokenize=False`. If your input has already been tokenized (but not lower-cased), you still want to lowercase it. As the code is currently written, that does not seem possible as [the BasicTokenizer is responsible for the lowercasing](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization.py#L101-L105):
```python
if do_basic_tokenize:
    self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case,
                                          never_split=never_split)
self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab)
self.max_len = max_len if max_len is not None else int(1e12)
```
I would then propose to either move the lower-casing into its own method or add a `do_lower_case` option to `WordpieceTokenizer`.
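In the meantime, a workaround might look like the sketch below (hedged: it simply lower-cases the pre-tokenized input by hand before the wordpiece step, using the `do_basic_tokenize` flag and `wordpiece_tokenizer` attribute shown in the snippet above; the input words are illustrative):

```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_basic_tokenize=False)

# Input that is already tokenized but not yet lower-cased.
words = ["Hello", ",", "World", "!"]

# do_lower_case only takes effect inside the basic tokenizer, so lower-case manually
# before running the wordpiece tokenizer.
pieces = [p for w in words for p in tokenizer.wordpiece_tokenizer.tokenize(w.lower())]
print(pieces)
```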
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/397/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/396/comments | https://api.github.com/repos/huggingface/transformers/issues/396/events | https://github.com/huggingface/transformers/pull/396 | 423,718,783 | MDExOlB1bGxSZXF1ZXN0MjYzMjI2Nzk2 | 396 | add tqdm to the process of eval in examples/run_swag.py | {
"login": "IndexFziQ",
"id": 15137975,
"node_id": "MDQ6VXNlcjE1MTM3OTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/15137975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IndexFziQ",
"html_url": "https://github.com/IndexFziQ",
"followers_url": "https://api.github.com/users/IndexFziQ/followers",
"following_url": "https://api.github.com/users/IndexFziQ/following{/other_user}",
"gists_url": "https://api.github.com/users/IndexFziQ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IndexFziQ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IndexFziQ/subscriptions",
"organizations_url": "https://api.github.com/users/IndexFziQ/orgs",
"repos_url": "https://api.github.com/users/IndexFziQ/repos",
"events_url": "https://api.github.com/users/IndexFziQ/events{/privacy}",
"received_events_url": "https://api.github.com/users/IndexFziQ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"tests/tokenization_openai_test.py::OpenAIGPTTokenizationTest::test_full_tokenizer FAILED",
"Ok, thanks!"
] | 1,553 | 1,553 | 1,553 | CONTRIBUTOR | null | Maybe better. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/396/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/396",
"html_url": "https://github.com/huggingface/transformers/pull/396",
"diff_url": "https://github.com/huggingface/transformers/pull/396.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/396.patch",
"merged_at": 1553684606000
} |
https://api.github.com/repos/huggingface/transformers/issues/395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/395/comments | https://api.github.com/repos/huggingface/transformers/issues/395/events | https://github.com/huggingface/transformers/issues/395 | 423,581,764 | MDU6SXNzdWU0MjM1ODE3NjQ= | 395 | AttributeError: 'BertOnlyMLMHead' object has no attribute 'seq_relationship' | {
"login": "leejason",
"id": 4224456,
"node_id": "MDQ6VXNlcjQyMjQ0NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4224456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leejason",
"html_url": "https://github.com/leejason",
"followers_url": "https://api.github.com/users/leejason/followers",
"following_url": "https://api.github.com/users/leejason/following{/other_user}",
"gists_url": "https://api.github.com/users/leejason/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leejason/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leejason/subscriptions",
"organizations_url": "https://api.github.com/users/leejason/orgs",
"repos_url": "https://api.github.com/users/leejason/repos",
"events_url": "https://api.github.com/users/leejason/events{/privacy}",
"received_events_url": "https://api.github.com/users/leejason/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi,\r\nWhat command are you running to load the model?\r\nAre you loading your own model or one or our pre-trained one?\r\n\r\nIt's normal that there is no `seq_relationship` attribute in a `BertOnlyMLMHead` but our pre-trained model should load without error.",
"\r\nI was loading my own pre-trained model by \"convert_tf_checkpoint_to_pytorch.py\". Would it be ok if I skip \"seq_relationship\" in the source code? ",
"Maybe, I don't have enough information to really be able to help you at this stage.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,553 | 1,559 | 1,559 | NONE | null | Is there way to fix it?
```bash
Skipping cls/predictions/transform/dense/kernel/adam_m
Skipping cls/predictions/transform/dense/kernel/adam_v
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-11-5d7ca59bd98d> in <module>()
7 model = BertForMaskedLM.from_pretrained(
8 pretrained_model_name_or_path=model_folder,
----> 9 from_tf=True, cache_dir=None)
10 #model = BertForMaskedLM.from_pretrained(model_version)
11
/content/my_pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py in from_pretrained(cls, pretrained_model_name_or_path, state_dict, cache_dir, from_tf, *inputs, **kwargs)
605 # Directly load from a TensorFlow checkpoint
606 weights_path = os.path.join(serialization_dir, TF_WEIGHTS_NAME)
--> 607 return load_tf_weights_in_bert(model, weights_path)
608 # Load from a PyTorch state_dict
609 old_keys = []
/content/my_pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)
100 #
101 else:
--> 102 pointer = getattr(pointer, l[0])
103 '''
104 #J
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
533 return modules[name]
534 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 535 type(self).__name__, name))
536
537 def __setattr__(self, name, value):
AttributeError: 'BertOnlyMLMHead' object has no attribute 'seq_relationship'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/395/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/394/comments | https://api.github.com/repos/huggingface/transformers/issues/394/events | https://github.com/huggingface/transformers/pull/394 | 423,573,705 | MDExOlB1bGxSZXF1ZXN0MjYzMTE3Mzk4 | 394 | Minor change in README | {
"login": "desireevl",
"id": 17139032,
"node_id": "MDQ6VXNlcjE3MTM5MDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/17139032?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/desireevl",
"html_url": "https://github.com/desireevl",
"followers_url": "https://api.github.com/users/desireevl/followers",
"following_url": "https://api.github.com/users/desireevl/following{/other_user}",
"gists_url": "https://api.github.com/users/desireevl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/desireevl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/desireevl/subscriptions",
"organizations_url": "https://api.github.com/users/desireevl/orgs",
"repos_url": "https://api.github.com/users/desireevl/repos",
"events_url": "https://api.github.com/users/desireevl/events{/privacy}",
"received_events_url": "https://api.github.com/users/desireevl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,553 | 1,553 | 1,553 | CONTRIBUTOR | null | Spelling fix of: weigths to weights | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/394/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/394",
"html_url": "https://github.com/huggingface/transformers/pull/394",
"diff_url": "https://github.com/huggingface/transformers/pull/394.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/394.patch",
"merged_at": 1553684580000
} |
https://api.github.com/repos/huggingface/transformers/issues/393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/393/comments | https://api.github.com/repos/huggingface/transformers/issues/393/events | https://github.com/huggingface/transformers/issues/393 | 423,520,906 | MDU6SXNzdWU0MjM1MjA5MDY= | 393 | AttributeError: 'BertForPreTraining' object has no attribute 'shape' | {
"login": "leejason",
"id": 4224456,
"node_id": "MDQ6VXNlcjQyMjQ0NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4224456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leejason",
"html_url": "https://github.com/leejason",
"followers_url": "https://api.github.com/users/leejason/followers",
"following_url": "https://api.github.com/users/leejason/following{/other_user}",
"gists_url": "https://api.github.com/users/leejason/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leejason/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leejason/subscriptions",
"organizations_url": "https://api.github.com/users/leejason/orgs",
"repos_url": "https://api.github.com/users/leejason/repos",
"events_url": "https://api.github.com/users/leejason/events{/privacy}",
"received_events_url": "https://api.github.com/users/leejason/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi,\r\nIs it a model trained from the original Google BERT Tensorflow implementation?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I get the same error as @leejason. I used the pre-trained BERT base uncased model from the original TF implementation and fine-tuned it on my own training set. I am now trying to use the fine-tuned model to do masked LM. ",
"I also have similar issue. I pretrained a bert **from scratch** using **nvidia implementation** with **customized config file and vocab**.\r\nThen I use\r\n```\r\nconvert_tf_checkpoint_to_pytorch.convert_tf_checkpoint_to_pytorch(BERT_MODEL_PATH + 'model.ckpt',\r\n BERT_MODEL_PATH + 'bert_config.json',\r\n BERT_MODEL_PATH + 'pytorch_model.bin')\r\n```\r\n```\r\n...\r\nLoading TF weight cls/predictions/transform/dense/bias with shape [512]\r\nLoading TF weight cls/predictions/transform/dense/bias/adam_m with shape [512]\r\nLoading TF weight cls/predictions/transform/dense/bias/adam_v with shape [512]\r\nLoading TF weight cls/predictions/transform/dense/kernel with shape [512, 512]\r\nLoading TF weight cls/predictions/transform/dense/kernel/adam_m with shape [512, 512]\r\nLoading TF weight cls/predictions/transform/dense/kernel/adam_v with shape [512, 512]\r\nLoading TF weight cls/seq_relationship/output_bias with shape [2]\r\nLoading TF weight cls/seq_relationship/output_weights with shape [2, 512]\r\nLoading TF weight global_step with shape []\r\nLoading TF weight good_steps with shape []\r\nLoading TF weight loss_scale with shape []\r\nSkipping bad_steps\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-20-f28c5910adfc> in <module>\r\n----> 1 bert = BertModel.from_pretrained(BERT_MODEL_PATH, from_tf=True).bert()\r\n\r\n~/InEx/input/huggingface/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)\r\n 611 # Directly load from a TensorFlow checkpoint\r\n 612 weights_path = os.path.join(serialization_dir, TF_WEIGHTS_NAME)\r\n--> 613 return load_tf_weights_in_bert(model, weights_path)\r\n 614 # Load from a PyTorch state_dict\r\n 615 old_keys = []\r\n\r\n~/InEx/input/huggingface/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py in load_tf_weights_in_bert(model, tf_checkpoint_path)\r\n 107 array = np.transpose(array)\r\n 108 try:\r\n--> 109 assert pointer.shape == array.shape\r\n 110 except AssertionError as e:\r\n 111 e.args += (pointer.shape, array.shape)\r\n\r\n~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __getattr__(self, name)\r\n 537 return modules[name]\r\n 538 raise AttributeError(\"'{}' object has no attribute '{}'\".format(\r\n--> 539 type(self).__name__, name))\r\n 540 \r\n 541 def __setattr__(self, name, value):\r\n\r\nAttributeError: 'BertModel' object has no attribute 'shape'```",
"I solve this problem by bypass some variables in the model, such as \"bad_steps\", “global_step\", \"good_steps\", \"loss_scale\". They don't have attribute 'shape‘ and I don't need them when fineturning the model.\r\n\r\nIn modeling.py, line 121, replace it with \r\n if any(n in [\"adam_v\", \"adam_m\", \"global_step\", \"bad_steps\", \"global_step\", \"good_steps\", \"loss_scale\"] for n in name):\r\nand delete line 151-156.",
"> I solve this problem by bypass some variables in the model, such as \"bad_steps\", “global_step\", \"good_steps\", \"loss_scale\". They don't have attribute 'shape‘ and I don't need them when fineturning the model.\r\n> \r\n> In modeling.py, line 121, replace it with\r\n> if any(n in [\"adam_v\", \"adam_m\", \"global_step\", \"bad_steps\", \"global_step\", \"good_steps\", \"loss_scale\"] for n in name):\r\n> and delete line 151-156.\r\n\r\nIt works. Thanks very much !",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm getting a similar error when trying to convert the newer BERT models released at\r\n[tensorflow/models/tree/master/official/nlp/](https://github.com/tensorflow/models/tree/master/official/nlp/bert#pre-trained-models).\r\n\r\nThese models are either BERT models trained with Keras or else checkpoints converted from \r\nthe original [google-research/bert](https://github.com/google-research/bert) repository. I also get the same error when I convert the TF1 to TF2 checkpoints myself using the [tf2_encoder_checkpoint_converter.py](https://github.com/tensorflow/models/blob/master/official/nlp/bert/tf2_encoder_checkpoint_converter.py) script: \r\n\r\nWhat I have tried:\r\n\r\nFirst, I have downloaded a model:\r\n```\r\nwget https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/cased_L-12_H-768_A-12.tar.gz\r\n# or\r\nwget https://storage.googleapis.com/cloud-tpu-checkpoints/bert/tf_20/cased_L-12_H-768_A-12.tar.gz \r\n```\r\nAfter unpacking:\r\n```\r\nexport BERT_BASE_DIR=cased_L-12_H-768_A-12\r\n\r\ntransformers-cli convert --model_type bert \\\r\n --tf_checkpoint $BERT_BASE_DIR/bert_model.ckpt \\\r\n --config $BERT_BASE_DIR/bert_config.json \\\r\n --pytorch_dump_output $BERT_BASE_DIR/pytorch_model.bin\r\n```\r\n\r\nThe command prints the configuration but throws the following error: \r\n\r\n```\r\nINFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_layer/_value_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [768, 12, 64]\r\nINFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_layer_norm/beta/.ATTRIBUTES/VARIABLE_VALUE with shape [768]\r\nINFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_layer_norm/gamma/.ATTRIBUTES/VARIABLE_VALUE with shape [768]\r\nINFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_output_dense/bias/.ATTRIBUTES/VARIABLE_VALUE with shape [768]\r\nINFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_output_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [12, 64, 768]\r\nINFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_intermediate_dense/bias/.ATTRIBUTES/VARIABLE_VALUE with shape [3072]\r\nINFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_intermediate_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [768, 3072]\r\nINFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_dense/bias/.ATTRIBUTES/VARIABLE_VALUE with shape [768]\r\nINFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [3072, 768]\r\nINFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_layer_norm/beta/.ATTRIBUTES/VARIABLE_VALUE with shape [768]\r\nINFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_layer_norm/gamma/.ATTRIBUTES/VARIABLE_VALUE with shape [768]\r\nINFO:transformers.modeling_bert:Loading TF weight save_counter/.ATTRIBUTES/VARIABLE_VALUE with shape []\r\nINFO:transformers.modeling_bert:Skipping _CHECKPOINTABLE_OBJECT_GRAPH\r\nTraceback (most recent call last):\r\n File \"/home/jbarry/anaconda3/envs/transformers/bin/transformers-cli\", line 30, in <module>\r\n service.run()\r\n File \"/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/commands/convert.py\", line 62, in run\r\n convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, 
self._pytorch_dump_output)\r\n File \"/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py\", line 36, in convert_tf_checkpoint_to_pytorch\r\n load_tf_weights_in_bert(model, config, tf_checkpoint_path)\r\n File \"/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 118, in load_tf_weights_in_bert\r\n assert pointer.shape == array.shape\r\n File \"/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 585, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'BertForPreTraining' object has no attribute 'shape'\r\n```\r\n\r\nThis is happening in a fresh environment with PyTorch 1.3 installed in Anaconda (Linux), as well as pip-installing `tf-nightly` and `transformers` (2.3.0).\r\n\r\nHas anyone else been able to successfully convert the TF 2.0 version models to PyTorch or know where I'm going wrong? Thanks!",
"> I'm getting a similar error when trying to convert the newer BERT models released at\r\n> [tensorflow/models/tree/master/official/nlp/](https://github.com/tensorflow/models/tree/master/official/nlp/bert#pre-trained-models).\r\n> \r\n> These models are either BERT models trained with Keras or else checkpoints converted from\r\n> the original [google-research/bert](https://github.com/google-research/bert) repository. I also get the same error when I convert the TF1 to TF2 checkpoints myself using the [tf2_encoder_checkpoint_converter.py](https://github.com/tensorflow/models/blob/master/official/nlp/bert/tf2_encoder_checkpoint_converter.py) script:\r\n> \r\n> What I have tried:\r\n> \r\n> First, I have downloaded a model:\r\n> \r\n> ```\r\n> wget https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/cased_L-12_H-768_A-12.tar.gz\r\n> # or\r\n> wget https://storage.googleapis.com/cloud-tpu-checkpoints/bert/tf_20/cased_L-12_H-768_A-12.tar.gz \r\n> ```\r\n> \r\n> After unpacking:\r\n> \r\n> ```\r\n> export BERT_BASE_DIR=cased_L-12_H-768_A-12\r\n> \r\n> transformers-cli convert --model_type bert \\\r\n> --tf_checkpoint $BERT_BASE_DIR/bert_model.ckpt \\\r\n> --config $BERT_BASE_DIR/bert_config.json \\\r\n> --pytorch_dump_output $BERT_BASE_DIR/pytorch_model.bin\r\n> ```\r\n> \r\n> The command prints the configuration but throws the following error:\r\n> \r\n> ```\r\n> INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_layer/_value_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [768, 12, 64]\r\n> INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_layer_norm/beta/.ATTRIBUTES/VARIABLE_VALUE with shape [768]\r\n> INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_layer_norm/gamma/.ATTRIBUTES/VARIABLE_VALUE with shape [768]\r\n> INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_output_dense/bias/.ATTRIBUTES/VARIABLE_VALUE with shape [768]\r\n> INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_attention_output_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [12, 64, 768]\r\n> INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_intermediate_dense/bias/.ATTRIBUTES/VARIABLE_VALUE with shape [3072]\r\n> INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_intermediate_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [768, 3072]\r\n> INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_dense/bias/.ATTRIBUTES/VARIABLE_VALUE with shape [768]\r\n> INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_dense/kernel/.ATTRIBUTES/VARIABLE_VALUE with shape [3072, 768]\r\n> INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_layer_norm/beta/.ATTRIBUTES/VARIABLE_VALUE with shape [768]\r\n> INFO:transformers.modeling_bert:Loading TF weight model/layer_with_weights-9/_output_layer_norm/gamma/.ATTRIBUTES/VARIABLE_VALUE with shape [768]\r\n> INFO:transformers.modeling_bert:Loading TF weight save_counter/.ATTRIBUTES/VARIABLE_VALUE with shape []\r\n> INFO:transformers.modeling_bert:Skipping _CHECKPOINTABLE_OBJECT_GRAPH\r\n> Traceback (most recent call last):\r\n> File \"/home/jbarry/anaconda3/envs/transformers/bin/transformers-cli\", line 30, in <module>\r\n> service.run()\r\n> File 
\"/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/commands/convert.py\", line 62, in run\r\n> convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)\r\n> File \"/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py\", line 36, in convert_tf_checkpoint_to_pytorch\r\n> load_tf_weights_in_bert(model, config, tf_checkpoint_path)\r\n> File \"/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 118, in load_tf_weights_in_bert\r\n> assert pointer.shape == array.shape\r\n> File \"/home/jbarry/anaconda3/envs/transformers/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 585, in __getattr__\r\n> type(self).__name__, name))\r\n> AttributeError: 'BertForPreTraining' object has no attribute 'shape'\r\n> ```\r\n> \r\n> This is happening in a fresh environment with PyTorch 1.3 installed in Anaconda (Linux), as well as pip-installing `tf-nightly` and `transformers` (2.3.0).\r\n> \r\n> Has anyone else been able to successfully convert the TF 2.0 version models to PyTorch or know where I'm going wrong? Thanks!\r\n\r\nignore those lines causing erros by changing\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L123-L127 to \r\n try:\r\n assert pointer.shape == array.shape\r\n except:\r\n pass\r\n\r\nsame thing for https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L129\r\n",
"> \r\n> \r\n> I solve this problem by bypass some variables in the model, such as \"bad_steps\", “global_step\", \"good_steps\", \"loss_scale\". They don't have attribute 'shape‘ and I don't need them when fineturning the model.\r\n> \r\n> In modeling.py, line 121, replace it with\r\n> if any(n in [\"adam_v\", \"adam_m\", \"global_step\", \"bad_steps\", \"global_step\", \"good_steps\", \"loss_scale\"] for n in name):\r\n> and delete line 151-156.\r\n\r\nIt helps! Thx u so much <3",
"I'm running into the same problem, I tried the solution proposed by @yzhang123 but it only makes me run in another error \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\lrizzello\\AppData\\Local\\JetBrains\\PyCharm 2019.3.4\\plugins\\python\\helpers\\pydev\\pydevd.py\", line 1434, in _exec\r\n pydev_imports.execfile(file, globals, locals) # execute the script\r\n File \"C:\\Users\\lrizzello\\AppData\\Local\\JetBrains\\PyCharm 2019.3.4\\plugins\\python\\helpers\\pydev\\_pydev_imps\\_pydev_execfile.py\", line 18, in execfile\r\n exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\r\n File \"C:/source/repos/DeduplicationSiamese/PythonDeduper/eval/eval_matcher_tf.py\", line 91, in <module>\r\n load_tf_weights_in_bert(model, config, tf_path)\r\n File \"C:\\Users\\lrizzello\\Anaconda3\\envs\\dedupe_transformer\\lib\\site-packages\\transformers\\modeling_bert.py\", line 129, in load_tf_weights_in_bert\r\n pointer.data = torch.from_numpy(array)\r\nTypeError: expected np.ndarray (got bytes)\r\n\r\nProcess finished with exit code -1\r\n\r\n```\r\n\r\nI got those checkpoints by training an existing huggingface model (namely 'google/bert_uncased_L-12_H-256_A-4') via the TFTrainer/TFTrainingArguments method\r\n\r\nI have tried many other hacks to get this working, such as [this one](https://stackoverflow.com/questions/60539758/biobert-for-keras-version-of-huggingface-transformers) or [this one](https://github.com/huggingface/transformers/issues/676#issuecomment-502101078) but unsuccesfully but nothing worked. I keep running into error after error.\r\n\r\nHas anyone managed to get this working any other way?",
"I managed to get it working by going through the pointers in debug mode and checking what variable name corresponded to what. This is the function I ended up using.\r\n\r\n```\r\ndef convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path):\r\n config_path = os.path.abspath(bert_config_file)\r\n tf_path = os.path.abspath(tf_checkpoint_path)\r\n print(\"Converting TensorFlow checkpoint from {} with config at {}\".format(tf_path, config_path))\r\n # Load weights from TF model\r\n init_vars = tf.train.list_variables(tf_path)\r\n excluded = [\"BERTAdam\", \"_power\", \"global_step\", \"_CHECKPOINTABLE_OBJECT_GRAPH\"]\r\n init_vars = list(filter(lambda x: all([True if e not in x[0] else False for e in excluded]), init_vars))\r\n names = []\r\n arrays = []\r\n for name, shape in init_vars:\r\n print(\"Loading TF weight {} with shape {}\".format(name, shape))\r\n array = tf.train.load_variable(tf_path, name)\r\n names.append(name)\r\n arrays.append(array)\r\n\r\n config = BertConfig.from_json_file(bert_config_file)\r\n print(\"Building PyTorch model from configuration: {}\".format(str(config)))\r\n # Initialise PyTorch model\r\n model = BertForSequenceClassification(config)\r\n\r\n for name, array in zip(names, arrays):\r\n name = name.split(\"/\")\r\n # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v\r\n # which are not required for using pretrained model\r\n if any(n in [\"adam_v\", \"adam_m\", \"global_step\", \"bad_steps\", \"global_step\", \"good_steps\", \"loss_scale\",\r\n \"AdamWeightDecayOptimizer\", \"AdamWeightDecayOptimizer_1\", \"save_counter\", \".OPTIMIZER_SLOT\"] for n in name) or \\\r\n name[0] == \"optimizer\":\r\n print(\"Skipping {}\".format(\"/\".join(name)))\r\n continue\r\n if \".OPTIMIZER_SLOT\" in name:\r\n idx = name.index(\".OPTIMIZER_SLOT\")\r\n name = name[:idx]\r\n elif \".ATTRIBUTES\" in name:\r\n idx = name.index(\".ATTRIBUTES\")\r\n name = name[:idx]\r\n print(name)\r\n pointer = model\r\n for m_name in name:\r\n if re.fullmatch(r\"[A-Za-z]+_\\d+\", m_name):\r\n scope_names = re.split(r\"_(\\d+)\", m_name)\r\n else:\r\n scope_names = [m_name]\r\n if scope_names[0] == \"kernel\" or scope_names[0] == \"gamma\":\r\n pointer = getattr(pointer, \"weight\")\r\n elif scope_names[0] == \"output_bias\" or scope_names[0] == \"beta\":\r\n pointer = getattr(pointer, \"bias\")\r\n elif scope_names[0] == \"output_weights\":\r\n pointer = getattr(pointer, \"weight\")\r\n elif scope_names[0] == \"squad\":\r\n pointer = getattr(pointer, \"classifier\")\r\n elif scope_names[0] == \"dense_output\" or scope_names[0] == \"bert_output\":\r\n pointer = getattr(pointer, \"output\")\r\n elif scope_names[0] == \"self_attention\":\r\n pointer = getattr(pointer, \"self\")\r\n else:\r\n try:\r\n pointer = getattr(pointer, scope_names[0])\r\n except AttributeError:\r\n logger.info(\"Skipping {}\".format(\"/\".join(name)))\r\n continue\r\n if len(scope_names) >= 2:\r\n num = int(scope_names[1])\r\n pointer = pointer[num]\r\n if m_name[-11:] == \"_embeddings\":\r\n pointer = getattr(pointer, \"weight\")\r\n elif m_name == \"kernel\" or m_name == \"gamma\" or m_name == \"output_weights\":\r\n array = np.transpose(array)\r\n # print(\"Initialize PyTorch weight {}\".format(name))\r\n pointer.data = torch.from_numpy(array)\r\n\r\n # Save pytorch-model\r\n print(\"Save PyTorch model to {}\".format(pytorch_dump_path))\r\n torch.save(model.state_dict(), pytorch_dump_path)\r\n\r\n\r\nconvert_tf_checkpoint_to_pytorch(tf_path, 
config_path, pytorch_dump_path)\r\n```\r\n\r\n",
"Hi, this is an actual programming error in modeling_bert.py. If you look at line 145 it's pretty obvious that the code should be continuing to the next iteration of the *outer* loop (over name, array) rather than the inner one (over the path components of name) - otherwise why would the error messages say \"skipping {name}\":\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L145\r\n\r\nTo fix this, simply extract the try/except block so that it wraps the entire loop (lines 127-148). I would supply a patch but I have to work with transformers 3.5.1 for the moment since I'm using sentence-transformers which hasn't been updated to the latest version.",
"@thomwolf If the above fix will be added to the master branch this will be great\r\nhttps://github.com/smartshark/transformers/pull/1\r\n",
"> \r\n\r\nI revised `modeling_bert.by` following @lrizzello 's code and could save tf1 checkpoint I personally trained into pytorch. I first changed tf1 checkpoint to tf2, and then used the below code. Here is the code I revised in `modeling_bert.py`\r\n \r\n```\r\ndef load_tf_weights_in_bert(model, config, tf_checkpoint_path):\r\n try:\r\n import re\r\n import numpy as np\r\n import tensorflow as tf\r\n except ImportError:\r\n logger.error(\r\n \"Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see \"\r\n \"https://www.tensorflow.org/install/ for installation instructions.\"\r\n )\r\n raise\r\n tf_path = os.path.abspath(tf_checkpoint_path)\r\n logger.info(f\"Converting TensorFlow checkpoint from {tf_path}\")\r\n # Load weights from TF model\r\n init_vars = tf.train.list_variables(tf_path)\r\n names = []\r\n arrays = []\r\n for name, shape in init_vars:\r\n logger.info(f\"Loading TF weight {name} with shape {shape}\")\r\n array = tf.train.load_variable(tf_path, name)\r\n names.append(name)\r\n arrays.append(array)\r\n for name, array in zip(names, arrays):\r\n name = name.split(\"/\")\r\n # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v\r\n # which are not required for using pretrained model\r\n if any(\r\n [\"adam_v\", \"adam_m\", \"global_step\", \"bad_steps\", \"global_step\", \"good_steps\", \"loss_scale\",\r\n \"AdamWeightDecayOptimizer\", \"AdamWeightDecayOptimizer_1\", \"save_counter\", \".OPTIMIZER_SLOT\"] for n in name) or \\\r\n name[0] == \"optimizer\":\r\n # n in [\"adam_v\", \"adam_m\", \"AdamWeightDecayOptimizer\", \"AdamWeightDecayOptimizer_1\", \"global_step\"]\r\n # for n in name\r\n # ):\r\n logger.info(f\"Skipping {'/'.join(name)}\")\r\n continue\r\n if \".OPTIMIZER_SLOT\" in name:\r\n idx = name.index(\".OPTIMIZER_SLOT\")\r\n name = name[:idx]\r\n elif \".ATTRIBUTES\" in name:\r\n idx = name.index(\".ATTRIBUTES\")\r\n name = name[:idx]\r\n print(name)\r\n pointer = model\r\n for m_name in name:\r\n if re.fullmatch(r\"[A-Za-z]+_\\d+\", m_name):\r\n scope_names = re.split(r\"_(\\d+)\", m_name)\r\n else:\r\n scope_names = [m_name]\r\n if scope_names[0] == \"kernel\" or scope_names[0] == \"gamma\":\r\n pointer = getattr(pointer, \"weight\")\r\n elif scope_names[0] == \"output_bias\" or scope_names[0] == \"beta\":\r\n pointer = getattr(pointer, \"bias\")\r\n elif scope_names[0] == \"output_weights\":\r\n pointer = getattr(pointer, \"weight\")\r\n elif scope_names[0] == \"squad\":\r\n pointer = getattr(pointer, \"classifier\")\r\n elif scope_names[0] == \"dense_output\" or scope_names[0] == \"bert_output\":\r\n pointer = getattr(pointer, \"output\")\r\n elif scope_names[0] == \"self_attention\":\r\n pointer = getattr(pointer, \"self\")\r\n else:\r\n try:\r\n pointer = getattr(pointer, scope_names[0])\r\n except AttributeError:\r\n logger.info(\"Skipping {}\".format(\"/\".join(name)))\r\n continue\r\n if len(scope_names) >= 2:\r\n num = int(scope_names[1])\r\n pointer = pointer[num]\r\n if m_name[-11:] == \"_embeddings\":\r\n pointer = getattr(pointer, \"weight\")\r\n elif m_name == \"kernel\" or m_name == \"gamma\" or m_name == \"output_weights\":\r\n array = np.transpose(array)\r\n # try:\r\n # if pointer.shape != array.shape:\r\n # raise ValueError(f\"Pointer shape {pointer.shape} and array shape {array.shape} mismatched\")\r\n # except AssertionError as e:\r\n # e.args += (pointer.shape, array.shape)\r\n # raise\r\n logger.info(f\"Initialize PyTorch weight {name}\")\r\n pointer.data = 
torch.from_numpy(array)\r\n return model\r\n```\r\n\r\nFor convert_tf_to_pytorch function, I used below.\r\n\r\n```\r\nimport argparse\r\nimport os\r\nimport torch\r\n\r\nfrom transformers import BertConfig, BertForPreTraining, load_tf_weights_in_bert\r\nfrom transformers.utils import logging\r\n\r\nlogging.set_verbosity_info()\r\ndef convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path):\r\n # Initialise PyTorch model\r\n config = BertConfig.from_json_file(bert_config_file)\r\n print(f\"Building PyTorch model from configuration: {config}\")\r\n model = BertForPreTraining(config)\r\n # Load weights from tf checkpoint\r\n load_tf_weights_in_bert(model, config, tf_checkpoint_path)\r\n # Save pytorch-model\r\n os.makedirs(pytorch_dump_path)\r\n pytorch_dump_path = os.path.join(pytorch_dump_path, '0')\r\n print(f\"Save PyTorch model to {pytorch_dump_path}\")\r\n torch.save(model.state_dict(), pytorch_dump_path)\r\nif __name__ == \"__main__\":\r\n parser = argparse.ArgumentParser()\r\n # Required parameters\r\n parser.add_argument(\r\n \"--tf_checkpoint_path\", default=None, type=str, required=True, help=\"Path to the TensorFlow checkpoint path.\"\r\n )\r\n parser.add_argument(\r\n \"--bert_config_file\",\r\n default=None,\r\n type=str,\r\n required=True,\r\n help=\"The config json file corresponding to the pre-trained BERT model. \\n\"\r\n \"This specifies the model architecture.\",\r\n )\r\n parser.add_argument(\r\n \"--pytorch_dump_path\", default=None, type=str, required=True, help=\"Path to the output PyTorch model.\"\r\n )\r\n args = parser.parse_args()\r\n convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.bert_config_file, args.pytorch_dump_path)\r\n```\r\n\r\nHope this help!",
"I use @Jwmc999 's method and successfully converted tensoeflow 2.x ckpt file to Pytorch.bin.\r\nCheck the below jupyterbook if needed.\r\n[Converting-TF-ckpt-to-Pytorch-model](https://github.com/suchunxie/Converting-TF-ckpt-to-Pytorch-model) "
] | 1,553 | 1,665 | 1,571 | NONE | null | Is there any suggestion for fixing the following? I was trying "convert_tf_checkpoint_to_pytorch.py" to convert a model trained from scratch but the conversion didn't work out....
```bash
Skipping cls/seq_relationship/output_weights/adam_v
Traceback (most recent call last):
File "pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 66, in <module>
args.pytorch_dump_path)
File "pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 37, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_bert(model, tf_checkpoint_path)
File "/content/my_pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py", line 117, in load_tf_weights_in_bert
assert pointer.shape == array.shape
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 535, in __getattr__
type(self).__name__, name))
AttributeError: 'BertForPreTraining' object has no attribute 'shape'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/393/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/392/comments | https://api.github.com/repos/huggingface/transformers/issues/392/events | https://github.com/huggingface/transformers/pull/392 | 423,390,699 | MDExOlB1bGxSZXF1ZXN0MjYyOTcyNjkw | 392 | Add full language model fine-tuning | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Also, build_py2 has failed because I deliberately did not include Py2 compatibility - it's only a few months from end-of-life now. If you really want me to, I can go back and include it, but we should be trying to let it go by now!",
"This is really great @Rocketknight1!\r\nThanks for taking the time to make a very clear and informative README.\r\nI've fixed the formatting issues in the README, slightly re-worded it and added a link to it in the main README.\r\nI'm merging it now.\r\nGreat job!"
] | 1,553 | 1,553 | 1,553 | MEMBER | null | These scripts add language model fine-tuning that closely mirrors the training process in the original BERT repo. The old fine-tuning example has been renamed `simple_lm_finetuning.py`. The key difference is the old script did not merge sentences when creating training examples, and so tended to create short training examples with lots of padding tokens - this is explained more fully in the README. The new scripts follow the BERT repo approach, which concatenates sentences from both documents to pack the training example up to `max_seq_len`.
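As a rough illustration of the packing described above (a simplified sketch, not the actual code in these scripts; the helper name and the three positions reserved for `[CLS]`/`[SEP]`/`[SEP]` are assumptions for the example):
```python
def pack_segment(sentences, tokenizer, budget):
    """Greedily concatenate whole tokenized sentences until the token budget is full."""
    tokens = []
    for sentence in sentences:
        piece = tokenizer.tokenize(sentence)
        if len(tokens) + len(piece) > budget:
            break
        tokens.extend(piece)
    return tokens

# A training pair packs both documents against a shared budget, e.g.
# budget = max_seq_len - 3 to leave room for [CLS] and the two [SEP] tokens,
# so far fewer positions end up as [PAD] than with single short sentences.
```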
All the scripts for LM fine-tuning have been moved to a subfolder of `examples/` with an included README to explain what LM fine-tuning is and how to use them. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/392/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/392/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/392",
"html_url": "https://github.com/huggingface/transformers/pull/392",
"diff_url": "https://github.com/huggingface/transformers/pull/392.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/392.patch",
"merged_at": 1553684557000
} |
https://api.github.com/repos/huggingface/transformers/issues/391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/391/comments | https://api.github.com/repos/huggingface/transformers/issues/391/events | https://github.com/huggingface/transformers/issues/391 | 423,059,915 | MDU6SXNzdWU0MjMwNTk5MTU= | 391 | Reproduce the results on CoLA | {
"login": "cooelf",
"id": 7037265,
"node_id": "MDQ6VXNlcjcwMzcyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7037265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cooelf",
"html_url": "https://github.com/cooelf",
"followers_url": "https://api.github.com/users/cooelf/followers",
"following_url": "https://api.github.com/users/cooelf/following{/other_user}",
"gists_url": "https://api.github.com/users/cooelf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cooelf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cooelf/subscriptions",
"organizations_url": "https://api.github.com/users/cooelf/orgs",
"repos_url": "https://api.github.com/users/cooelf/repos",
"events_url": "https://api.github.com/users/cooelf/events{/privacy}",
"received_events_url": "https://api.github.com/users/cooelf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Cola is probably one of the most unstable tasks for BERT. For us it mostly boiled down to running many seeds. If all you care about is a good pre-trained model checkpoint, we have a 65 / 61 run at https://github.com/zphang/bert_on_stilts ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@cooelf hi, can you reproduce cola score of BERT now? I work on this recently but still can't reach the reported score on the test set, even if restart with different seed multiple times, and I further noticed that improvement on dev set is inconsistent with that on the test set at all.",
"Thanks for all the feedbacks! \r\n@scissorsy The results of dev and test are indeed incosistent. I have changed the seeds, lr, warmup rates and max_epochs, and finally the test result reached about 60% for some runs. Using multi-tasking or transfer from bigger dataset such as MNLI seems to be more stable.\r\n\r\n",
"> Thanks for all the feedbacks!\r\n> @scissorsy The results of dev and test are indeed incosistent. I have changed the seeds, lr, warmup rates and max_epochs, and finally the test result reached about 60% for some runs. Using multi-tasking or transfer from bigger dataset such as MNLI seems to be more stable.\r\n\r\nThanks!",
"Hi @cooelf, what parameter number did you change in order to fit a better result, thanks!"
] | 1,553 | 1,563 | 1,560 | CONTRIBUTOR | null | I am trying to reproduce the CoLA results reported in the BERT paper, but my numbers are far from the reported ones. My best MCC (BERT-large) on dev is 64.79% and the test result is 56.9%, while the reported test result is 60.5%. The learning rate is 2e-5 and the total number of epochs is 5. For BERT-base, the result is also lower by 3-5%.
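(For reference, the Matthews correlation coefficient quoted here can be computed straight from the dev-set predictions; the snippet below is only an illustration with toy values, not part of `run_classifier.py`.)
```python
from sklearn.metrics import matthews_corrcoef

labels = [1, 0, 1, 1, 0, 1]  # gold CoLA dev labels (toy values)
preds = [1, 0, 1, 0, 0, 1]   # argmax of the model logits (toy values)
print("dev mcc: {:.2%}".format(matthews_corrcoef(labels, preds)))
```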
As the paper said,
`for BERTLARGE we found that fine-tuning was sometimes unstable on small data sets (i.e., some runs would produce degenerate results), so we ran several random restarts and selected the model that performed best on the Dev set. `
I also tried several restarts with different learning rates and random seeds, but it brought no improvement. I'm quite confused about this reproduction gap. Any suggestions would be greatly appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/391/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/390/comments | https://api.github.com/repos/huggingface/transformers/issues/390/events | https://github.com/huggingface/transformers/issues/390 | 422,725,827 | MDU6SXNzdWU0MjI3MjU4Mjc= | 390 | 'NoneType' object with constructor | {
"login": "leelaylay",
"id": 12346371,
"node_id": "MDQ6VXNlcjEyMzQ2Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/12346371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leelaylay",
"html_url": "https://github.com/leelaylay",
"followers_url": "https://api.github.com/users/leelaylay/followers",
"following_url": "https://api.github.com/users/leelaylay/following{/other_user}",
"gists_url": "https://api.github.com/users/leelaylay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leelaylay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leelaylay/subscriptions",
"organizations_url": "https://api.github.com/users/leelaylay/orgs",
"repos_url": "https://api.github.com/users/leelaylay/repos",
"events_url": "https://api.github.com/users/leelaylay/events{/privacy}",
"received_events_url": "https://api.github.com/users/leelaylay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Me too.It is probably Network connection problem.",
"The network connection check has been relaxed in the now merged #500.\r\nIt will be included in the next PyPI release (probably next week).\r\nIn the meantime you can install from `master`.",
"@thomwolf Thank you.",
"The new release is on pypi!"
] | 1,553 | 1,556 | 1,556 | NONE | null | I run the code below and often get a 'NoneType' object error. (I usually run it with multiprocessing.)
```python
model = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/390/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/389/comments | https://api.github.com/repos/huggingface/transformers/issues/389/events | https://github.com/huggingface/transformers/pull/389 | 422,242,965 | MDExOlB1bGxSZXF1ZXN0MjYyMDc1MDc5 | 389 | Fix cosine schedule | {
"login": "lukovnikov",
"id": 1732910,
"node_id": "MDQ6VXNlcjE3MzI5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1732910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukovnikov",
"html_url": "https://github.com/lukovnikov",
"followers_url": "https://api.github.com/users/lukovnikov/followers",
"following_url": "https://api.github.com/users/lukovnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/lukovnikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukovnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukovnikov/subscriptions",
"organizations_url": "https://api.github.com/users/lukovnikov/orgs",
"repos_url": "https://api.github.com/users/lukovnikov/repos",
"events_url": "https://api.github.com/users/lukovnikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukovnikov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @lukovnikov, yes I think the various schedules you have added to your fork are very nice!\r\nDo you want to add them in this PR as well?\r\nOtherwise, I'll merge it.",
"Merging it for now. Thanks @lukovnikov ",
"Hi, sorry, lost track of this, will make a new PR soon."
] | 1,552 | 1,554 | 1,554 | CONTRIBUTOR | null | Fixing similar problem to #327 and #324 in cosine schedule.
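(For context, the shape such a warmup-cosine schedule is generally expected to have is sketched below; this is only an illustration of the intended curve, not the exact formula used in `optimization.py`.)
```python
import math

def warmup_cosine_lr_mult(progress, warmup=0.002):
    """LR multiplier: linear warmup, then a half-cosine decay from 1 down to 0.

    `progress` is the fraction of total training steps already taken (0..1).
    """
    if progress < warmup:
        return progress / warmup
    rest = (progress - warmup) / (1.0 - warmup)
    return 0.5 * (1.0 + math.cos(math.pi * rest))
```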
Btw, do you think it would make sense to have [something like this](https://github.com/lukovnikov/pytorch-pretrained-BERT/blob/optim/pytorch_pretrained_bert/optimization.py) for both of your `optimization.py`'s? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/389/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/389",
"html_url": "https://github.com/huggingface/transformers/pull/389",
"diff_url": "https://github.com/huggingface/transformers/pull/389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/389.patch",
"merged_at": 1554283304000
} |
https://api.github.com/repos/huggingface/transformers/issues/388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/388/comments | https://api.github.com/repos/huggingface/transformers/issues/388/events | https://github.com/huggingface/transformers/pull/388 | 421,917,407 | MDExOlB1bGxSZXF1ZXN0MjYxODM5NTEw | 388 | Added remaining GLUE tasks to 'run_classifier.py' | {
"login": "ananyahjha93",
"id": 7491256,
"node_id": "MDQ6VXNlcjc0OTEyNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7491256?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ananyahjha93",
"html_url": "https://github.com/ananyahjha93",
"followers_url": "https://api.github.com/users/ananyahjha93/followers",
"following_url": "https://api.github.com/users/ananyahjha93/following{/other_user}",
"gists_url": "https://api.github.com/users/ananyahjha93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ananyahjha93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ananyahjha93/subscriptions",
"organizations_url": "https://api.github.com/users/ananyahjha93/orgs",
"repos_url": "https://api.github.com/users/ananyahjha93/repos",
"events_url": "https://api.github.com/users/ananyahjha93/events{/privacy}",
"received_events_url": "https://api.github.com/users/ananyahjha93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ananyahjha93,\r\nThanks for this PR.\r\nDo you have some results from fine-tuning BERT on the other tasks?\r\nAlso, I think we should add some details on the available tasks in the readme as well.",
"@thomwolf I have added results on GLUE dev set in the README and details on how to run any GLUE task. But, I have also added a warning stating that all GLUE tasks have not been tested with half-precision training. With the new tasks and metrics being added, there should be one round of tests with half-precision training as well. Unfortunately, I do not have direct access to a V100 or any of the RTX cards in order to run that. \r\n\r\nAlso, for some reason the previous pull request was passing each build_py2 test but now it is failing the 'OpenAIGPTTokenizationTest'. I only added changes to the README in my latest commit.",
"This looks great, thanks @ananyahjha93!"
] | 1,552 | 1,553 | 1,553 | CONTRIBUTOR | null | Also added metrics used in the GLUE paper for each task. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/388/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/388",
"html_url": "https://github.com/huggingface/transformers/pull/388",
"diff_url": "https://github.com/huggingface/transformers/pull/388.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/388.patch",
"merged_at": 1553760413000
} |
https://api.github.com/repos/huggingface/transformers/issues/387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/387/comments | https://api.github.com/repos/huggingface/transformers/issues/387/events | https://github.com/huggingface/transformers/issues/387 | 421,899,003 | MDU6SXNzdWU0MjE4OTkwMDM= | 387 | run_squad.py cannot predict only | {
"login": "lixinsu",
"id": 15691697,
"node_id": "MDQ6VXNlcjE1NjkxNjk3",
"avatar_url": "https://avatars.githubusercontent.com/u/15691697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lixinsu",
"html_url": "https://github.com/lixinsu",
"followers_url": "https://api.github.com/users/lixinsu/followers",
"following_url": "https://api.github.com/users/lixinsu/following{/other_user}",
"gists_url": "https://api.github.com/users/lixinsu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lixinsu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lixinsu/subscriptions",
"organizations_url": "https://api.github.com/users/lixinsu/orgs",
"repos_url": "https://api.github.com/users/lixinsu/repos",
"events_url": "https://api.github.com/users/lixinsu/events{/privacy}",
"received_events_url": "https://api.github.com/users/lixinsu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It is better to imitate original Bert repo to add separate argument `args.vocab_file`, and during prediction, argument `bert_model` is the directory containing the fine-tuned model. ",
"Make sense indeed. Would you like to submit a PR on that?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,552 | 1,558 | 1,558 | NONE | null | The existing code cannot load the fine-tuned model properly when running prediction only.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/f3e5404880902a1bdfed2b1d47d10a6c672dc430/examples/run_squad.py#L1011-L1025 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/387/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/386/comments | https://api.github.com/repos/huggingface/transformers/issues/386/events | https://github.com/huggingface/transformers/pull/386 | 421,885,490 | MDExOlB1bGxSZXF1ZXN0MjYxODIwNzEy | 386 | Shared tokenizer interface | {
"login": "CatalinVoss",
"id": 332459,
"node_id": "MDQ6VXNlcjMzMjQ1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/332459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CatalinVoss",
"html_url": "https://github.com/CatalinVoss",
"followers_url": "https://api.github.com/users/CatalinVoss/followers",
"following_url": "https://api.github.com/users/CatalinVoss/following{/other_user}",
"gists_url": "https://api.github.com/users/CatalinVoss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CatalinVoss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CatalinVoss/subscriptions",
"organizations_url": "https://api.github.com/users/CatalinVoss/orgs",
"repos_url": "https://api.github.com/users/CatalinVoss/repos",
"events_url": "https://api.github.com/users/CatalinVoss/events{/privacy}",
"received_events_url": "https://api.github.com/users/CatalinVoss/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi Catalin, thanks for that!\r\nYes backward compatibility is an important things to keep in mind here.\r\nI think the changes look nice.\r\nWe should also:\r\n- document them in the readme, and\r\n- add associated tests in the relevant test files.",
"Just before this gets merged - I've noticed that the GPT(1) tokenizer ignores whether or not a string contains special tokens, and hence doesn't encode them properly. \r\nWe've got around this by splitting `text` on whitespace inside the `tokenize` method and iterating, checking if a word is a special token or not, like so:\r\n```\r\ndef tokenize(self, text):\r\n split_tokens = []\r\n for t in text.split(' '):\r\n if t in self.special_tokens:\r\n split_tokens.extend([t])\r\n else:\r\n #current tokenization stuff\r\n```\r\nWould be nice if this could be included :)",
"@andrewPoulton we can include this, but only for the whitespace tokenizer fallback of GPT-1. The original tokenizer (SpaCy) would split tokens like `<\\w>` in pieces which is the reason it was not included originally.\r\n\r\nOverall I must say I'm not a huge fan of the not-split-specific-tokens feature (the `never_split` option). We've added it due to popular request but it is very dependent on the underlying behavior of the tokenizer and the character content of special tokens (does it contains spaces, dashes...) and from the issues it looks like a common source of bugs and un-intended behavior (see #410 for a latest example).",
"Hi @CatalinVoss, from the state of the PR I understand you are still working on it.\r\nMaybe ping me when you think it's ready for merging in master and I'll have a look again?",
"Hey @thomwolf yeah, just didn't get around to it yet, but then I needed the BERT decoding piece yesterday so I merged it in. It's imperfect. If we renamed everything to be consistent with words, tokens, token IDs, etc. we would have to change the method names, per my comment above. Do you want to do that? Otherwise perhaps better to do that in a separate PR and target some sort of v0.7 branch?",
"Ok we still have to add the docstrings, test and details in the readme for these methods.\r\nHaven't find time this week. I will see if I can find time next week. ",
"@CatalinVoss I also was wondering about this while writing some extra functionality for the GPT2 tokenizer to encode new strings for finetuning tasks, but ended up writing a `special_token_to_id` functionality for getting IDs of special tokens -- the pattern of usage of the GPT1 tokenizer for finetuning tasks seems to be to add the special token IDs after encoding the rest of the string to process.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hey @thomwolf, what was the decision on this? I can revisit if you want. We still wanted this in our fork… thanks much!",
"I never really had time to add tests and documentation to the PR but it's a good idea.\r\nLet's add this feature in the new release.",
"Looks like this was taken care of with your refactor in `tokenization_utils.py`. Very nice!!"
] | 1,552 | 1,563 | 1,561 | CONTRIBUTOR | null | Up to this point, `tokenize()` and `encode()` mean different things in different places. In GPT-land, `tokenize` doesn't get us all the way to token IDs. Ideally, the tokenizers would share a common interface so that they can be plugged in and out of places just like the models.
I don't know if you want breaking changes, so I just created `encode()` and `decode()` as aliases on tokenizers that did not adhere to that spec already.
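To make the proposed surface concrete, here is a minimal sketch of the shared interface (the toy tokenizer and its vocabulary are made up for illustration; they are not the methods added in this PR):
```python
class ToyWhitespaceTokenizer:
    """Illustrates the common text -> ids -> text round trip every tokenizer would expose."""

    def __init__(self, vocab):
        self.vocab = vocab                                      # token -> id
        self.ids_to_tokens = {i: t for t, i in vocab.items()}   # id -> token

    def tokenize(self, text):
        return text.split()

    def encode(self, text):
        # text -> token ids, whatever intermediate representation the model uses
        return [self.vocab[token] for token in self.tokenize(text)]

    def decode(self, ids):
        # token ids -> text, the inverse of encode()
        return " ".join(self.ids_to_tokens[i] for i in ids)


tokenizer = ToyWhitespaceTokenizer({"hello": 0, "world": 1})
assert tokenizer.decode(tokenizer.encode("hello world")) == "hello world"
```
With a contract like this in place, one tokenizer can be swapped for another without the calling code caring whether the IDs come from WordPiece, BPE, or anything else.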
There is more cleanup to do. Since it seems that the non-BERT models were added later on, the BERT files should probably be renamed to `tokenizer_bert`, etc., but I left that in place to maintain compatibility. BERT is still missing `decode`.
Please advise and I can clean it up. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/386/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/386",
"html_url": "https://github.com/huggingface/transformers/pull/386",
"diff_url": "https://github.com/huggingface/transformers/pull/386.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/386.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/385/comments | https://api.github.com/repos/huggingface/transformers/issues/385/events | https://github.com/huggingface/transformers/issues/385 | 421,868,374 | MDU6SXNzdWU0MjE4NjgzNzQ= | 385 | pre-training a BERT from scratch | {
"login": "chiyuzhang94",
"id": 33407613,
"node_id": "MDQ6VXNlcjMzNDA3NjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/33407613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiyuzhang94",
"html_url": "https://github.com/chiyuzhang94",
"followers_url": "https://api.github.com/users/chiyuzhang94/followers",
"following_url": "https://api.github.com/users/chiyuzhang94/following{/other_user}",
"gists_url": "https://api.github.com/users/chiyuzhang94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chiyuzhang94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiyuzhang94/subscriptions",
"organizations_url": "https://api.github.com/users/chiyuzhang94/orgs",
"repos_url": "https://api.github.com/users/chiyuzhang94/repos",
"events_url": "https://api.github.com/users/chiyuzhang94/events{/privacy}",
"received_events_url": "https://api.github.com/users/chiyuzhang94/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"We can't now. The code is still incomplete. Is it possible recently? I really want to help but not familiar with tensorflow.",
"A related issue is #376.\r\n\r\nHowever, pytorch-pretraned-BERT was mostly designed to provide easy and fast access to pretrained models.\r\n\r\nIf you want to train a BERT model from scratch you will need a more robust code base for training and data-processing than the simple examples that are provided in this repo.\r\n\r\nI would probably advise to move to a more integrated codebase like the nice [XLM repo](https://github.com/facebookresearch/XLM) of @glample and @aconneau.",
"I've been able to use the codebase for this, and didn't see much issues, however I might be overlooking something. If you construct and initialize a new model instead of loading from pretrained, you can use the `simple_lm_finetuning` script to train on new data.\r\n\r\nThomas, did you have any specific other issues in mind? ",
"NVidia recently [released](https://medium.com/future-vision/bert-meets-gpus-403d3fbed848?fbclid=IwAR0bFskUVVKDRyYF-9cQGgRXeq7dTvteGHi10HaTG5zI7_eE8oW-BfrxYQw) TF and PyTorch code to pretrain Bert from scratch. I wrapped it in a script to launch on multiple machines on AWS [here](https://github.com/cybertronai/Megatron-LM/blob/master/launch_pretrain_bert.py). Currently I'm still figuring out why the 64-GPU AWS throughput is 2x worse than what they are getting locally",
"Thanks @yaroslavvb!",
"Thanks! @yaroslavvb",
"@yaroslavvb [this article](https://medium.com/the-mission/why-building-your-own-deep-learning-computer-is-10x-cheaper-than-aws-b1c91b55ce8c) explains why cloud computing can have inconsistent throughput. I think it's a great read, and I've been working on setting up my own rig.\r\n\r\nI see in [the script](https://github.com/cybertronai/Megatron-LM/blob/master/launch_pretrain_bert.py#L49) that you're using 8 GPUs. have long is the pretraining taking with that? I'm not sure whether to go with gcloud TPUs or AWS. the Bert readme said that a single TPU will take up to 2 weeks to finish pretaining..",
"@yaroslavvb hi, did you train bert successfully? I trained it with https://github.com/NVIDIA/Megatron-LM/scripts/pretrain_bert_tfrecords_distributed.sh on 2 machines with 16 GPUS, but when it was sotpped after ' > number of parameters: 336226108' and i got nothing else after that, the GPU-Util is 0%.",
"@MarvinLong yes, I was able to launch it on multiple machines and observe the model training, and it's about 600ms per step. I did not try training it to completion as the scaling efficiency on p3dn instances on AWS is only about 50% because of NCCL bug currently. I'm wondering if your machines can't communicate to each other on the right ports. @jrc2139 I have not observed inconsistent throughput, I've used this [codebase](https://github.com/cybertronai/imagenet18) to train imagenet in 19 minutes on 64 GPUs on AWS p3 instances.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> I've been able to use the codebase for this, and didn't see much issues, however I might be overlooking something. If you construct and initialize a new model instead of loading from pretrained, you can use the `simple_lm_finetuning` script to train on new data.\r\n> \r\n> Thomas, did you have any specific other issues in mind?\r\n\r\nI'm trying to train on my own custom data and I'm a bit confused about how to \"construct and initialize a new model\"—i.e., when not working with pretrained models. Any help appreciated.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@yaroslavvb Hi, I can launch Megatron-LM to pretrain bert, but my MLM loss stay around 6.8. How about you? Can you pretrain BERT successfully?",
"> @yaroslavvb Hi, I can launch Megatron-LM to pretrain bert, but my MLM loss stay around 6.8. How about you? Can you pretrain BERT successfully?\r\n\r\nI was able to pre-train using this repo [https://github.com/google-research/bert]. However, even with one million step, the MLM accuracy was 64.69% and it's loss was 2.4. I am eager to know if someone else has pre-trained and got MLM accuracy higher than this.",
"> > @yaroslavvb Hi, I can launch Megatron-LM to pretrain bert, but my MLM loss stay around 6.8. How about you? Can you pretrain BERT successfully?\r\n> \r\n> I was able to pre-train using this repo [https://github.com/google-research/bert]. However, even with one million step, the MLM accuracy was 64.69% and it's loss was 2.4. I am eager to know if someone else has pre-trained and got MLM accuracy higher than this.\r\n\r\nAccording to the pretrian log from gloun-nlp[https://github.com/dmlc/web-data/blob/master/gluonnlp/logs/bert/bert_base_pretrain.log](url), your MLM accuracy seems right though with a higher loss. I think you can try to check it with fintuning. \r\n",
"@ibrahimishag I want to know if you pretrain your BERT with Bookscorpus. I cannot find a copy of that. For my pretraining, my bert loss is decreasing so so slowly after removing clip-grad-norm. There must be something wrong with me.",
"@JF-D I pre-trained on other domain-specific corpus.",
"Can someone please specify why Thomas mention/refers XLM repo from facebook? Is there any fault from huggingface? I thought I would just use hugging face repo without using \"pretrained paramater\" they generously provided for us. \r\n\r\nJust struggling with Facebook repo\"span bert\" and seems it is hard to even run this due to distributed launch issue. Hope it is ok to use hugging face's one to reproduce paper result",
"Is it possible to train from scratch using the run_language_modeling.py code? does hugging face support training from scratch. I looked at this example https://huggingface.co/blog/how-to-train but this thread is hitting that training from scratch is not currently supported.",
"Any update on training from scratch BERT-like models with huggingface? ",
"Yes this has been supported for close to a year now ;)",
"@julien-c Thanks. I really appreacite the prompt response.\r\n\r\nIs there any tutorial/example specifically for BERT (/ALBERT) pretraining ?",
"Pretraining from scratch is a very rigid demand for users.",
"> @julien-c Thanks. I really appreacite the prompt response.\r\n> \r\n> Is there any tutorial/example specifically for BERT (/ALBERT) pretraining ?\r\n\r\nwait example",
"This is all there is to pretraining:\r\n```\r\nimport os\r\n\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\r\n\r\nfrom pathlib import Path\r\nfrom transformers import BertTokenizer\r\nfrom tokenizers.processors import BertProcessing\r\nfrom transformers import RobertaConfig\r\nfrom transformers import RobertaForMaskedLM\r\nfrom transformers import LineByLineTextDataset\r\nfrom transformers import DataCollatorForLanguageModeling\r\nfrom transformers import Trainer, TrainingArguments\r\nimport torch\r\n\r\ntokenizer = BertTokenizer('./data/vocab.txt')\r\n\r\ntokens = tokenizer.encode(\"b140 m33 c230\")\r\nprint('token ids: {}'.format(tokens))\r\n\r\nconfig = RobertaConfig(\r\n vocab_size=1458,\r\n max_position_embeddings=130,\r\n hidden_size=384,\r\n intermediate_size=1536,\r\n num_attention_heads=4,\r\n num_hidden_layers=4,\r\n type_vocab_size=1,\r\n)\r\n\r\n# FROM SCRATCH\r\nmodel = RobertaForMaskedLM(config=config)\r\n\r\n# CONTINUE TRAINING -- i.e., just load your saved model using \"from_pretrained\"\r\n# model = RobertaForMaskedLM.from_pretrained('./trained_model')\r\n\r\nprint(model.num_parameters())\r\n\r\n# We should save this dataset since it's a bit slow to build each time\r\ndataset = LineByLineTextDataset(\r\n tokenizer=tokenizer,\r\n file_path=\"./data/my_data.txt\",\r\n block_size=128,\r\n)\r\n\r\ndata_collator = DataCollatorForLanguageModeling(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n)\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./out/my_run\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=100,\r\n per_device_train_batch_size=128,\r\n save_steps=100,\r\n save_total_limit=2,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=dataset,\r\n prediction_loss_only=True,\r\n)\r\n\r\ntrainer.train()\r\n\r\ntrainer.save_model(\"./trained_model\")\r\n```\r\n\r\nNote that this is a small model, with a specialized, fixed vocabulary, so I'm using the old BERT tokenizer I had working from a previous project. For \"real\" languages you'd use one of the RobertaTokenizer options.\r\n\r\nI'm just getting back to this project after being away for a while, and I'm noticing I'm getting a warning about switching to the Datasets Library. I'll do that at some point, but it's working for now so I won't mess with it.\r\nAlso, I'm curious if anyone can tell me how to set the maximum length of inputs, so that longer inputs truncate?\r\n\r\nUPDATE: Duh, sorry, looks like `tokenizer.encode()` takes `max_length` and `truncation` parameters. Simple.",
"One question; I'm noticing that creating the dataset...\r\n```\r\ndataset = LineByLineTextDataset(\r\n tokenizer=tokenizer,\r\n file_path=\"./data/my_data.txt\",\r\n block_size=128,\r\n)\r\n```\r\n...is taking a long time. Is it possible to save that as a file, to avoid the wait when I (re)run training?",
"Hi, Is there any specifications of how to generate dataset for \"pretraining from scratch\" with raw texts ?",
"> One question; I'm noticing that creating the dataset...\r\n> \r\n> ```\r\n> dataset = LineByLineTextDataset(\r\n> tokenizer=tokenizer,\r\n> file_path=\"./data/my_data.txt\",\r\n> block_size=128,\r\n> )\r\n> ```\r\n> \r\n> ...is taking a long time. Is it possible to save that as a file, to avoid the wait when I (re)run training?\r\n\r\nthe same question",
"Detailed Tutorial\r\nhttps://mlcom.github.io/Create-Language-Model/"
] | 1,552 | 1,612 | 1,569 | NONE | null | I am wondering whether I can train a new BERT from scratch with this pytorch BERT. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/385/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/384/comments | https://api.github.com/repos/huggingface/transformers/issues/384/events | https://github.com/huggingface/transformers/issues/384 | 421,794,835 | MDU6SXNzdWU0MjE3OTQ4MzU= | 384 | Incrementally Train BERT with minimum QnA records - to get improved results | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Wondering Noone is replying\n\nOn Sat, 25 May, 2019, 4:09 PM stale[bot], <[email protected]> wrote:\n\n> Closed #384\n> <https://github.com/huggingface/pytorch-pretrained-BERT/issues/384>.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-pretrained-BERT/issues/384?email_source=notifications&email_token=AHRBKICW2VUPHXBBGSQD5O3PXEJN5A5CNFSM4G666UB2YY3PNVWWK3TUL52HS4DFWZEXG43VMVCXMZLOORHG65DJMZUWGYLUNFXW5KTDN5WW2ZLOORPWSZGORUM5MZY#event-2367280743>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AHRBKIG2BUNIOBDUV6QGHQDPXEJN5ANCNFSM4G666UBQ>\n> .\n>\n"
] | 1,552 | 1,558 | 1,558 | NONE | null | This question was also posted on Stack Exchange, but it points to the BERT group:
https://datascience.stackexchange.com/questions/47406/incrementally-train-bert-with-minimum-qna-records
The question is: after training on my data with some new questions and answers, new checkpoints are generated. With the new checkpoints, when asked the same question, the answer is still not correct. Why is the training not helping to make the answer right?
Though those questions point to the TensorFlow version, the same was tried with the PyTorch version too and the results are the same. Experts on BERT, transformers, or neural networks can probably better pinpoint the issue.
Details:
We are using Google BERT for Question and Answering. We have fine-tuned BERT with the SQuAD QnA training data set (https://github.com/google-research/bert , https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json)
This generated new checkpoints, and BERT gives good answers for most of the questions we ask on our text documents. However, there are some questions it answers wrong, so we are trying to fine-tune it further with our question and its known answer on our text document. We trained further from the last generated checkpoint and got a new checkpoint.
With the new checkpoint, when we ask the same question, the answer is still not corrected! Previously BERT gave the wrong answer with 99% confidence, and now it gives the same wrong answer with 95% confidence.
Can someone who has had the same or a similar experience please advise?
The following questions in the BERT GitHub issues have been unanswered for quite some time:
BERT accuracy reduced after providing custom training. The answer is also not correct: https://github.com/google-research/bert/issues/492
Unable to incrementally train BERT with custom training: https://github.com/google-research/bert/issues/482
Little training has no impact: https://github.com/google-research/bert/issues/481
Custom Domain Training: https://github.com/google-research/bert/issues/498 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/384/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/383/comments | https://api.github.com/repos/huggingface/transformers/issues/383/events | https://github.com/huggingface/transformers/pull/383 | 421,762,773 | MDExOlB1bGxSZXF1ZXN0MjYxNzQzODU2 | 383 | pull from original | {
"login": "perfmjs",
"id": 3114391,
"node_id": "MDQ6VXNlcjMxMTQzOTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3114391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/perfmjs",
"html_url": "https://github.com/perfmjs",
"followers_url": "https://api.github.com/users/perfmjs/followers",
"following_url": "https://api.github.com/users/perfmjs/following{/other_user}",
"gists_url": "https://api.github.com/users/perfmjs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/perfmjs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/perfmjs/subscriptions",
"organizations_url": "https://api.github.com/users/perfmjs/orgs",
"repos_url": "https://api.github.com/users/perfmjs/repos",
"events_url": "https://api.github.com/users/perfmjs/events{/privacy}",
"received_events_url": "https://api.github.com/users/perfmjs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,552 | 1,552 | 1,552 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/383/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/383",
"html_url": "https://github.com/huggingface/transformers/pull/383",
"diff_url": "https://github.com/huggingface/transformers/pull/383.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/383.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/382/comments | https://api.github.com/repos/huggingface/transformers/issues/382/events | https://github.com/huggingface/transformers/issues/382 | 421,646,528 | MDU6SXNzdWU0MjE2NDY1Mjg= | 382 | fp16 overflow in GPT-2 | {
"login": "andrewPoulton",
"id": 25584650,
"node_id": "MDQ6VXNlcjI1NTg0NjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/25584650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andrewPoulton",
"html_url": "https://github.com/andrewPoulton",
"followers_url": "https://api.github.com/users/andrewPoulton/followers",
"following_url": "https://api.github.com/users/andrewPoulton/following{/other_user}",
"gists_url": "https://api.github.com/users/andrewPoulton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andrewPoulton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andrewPoulton/subscriptions",
"organizations_url": "https://api.github.com/users/andrewPoulton/orgs",
"repos_url": "https://api.github.com/users/andrewPoulton/repos",
"events_url": "https://api.github.com/users/andrewPoulton/events{/privacy}",
"received_events_url": "https://api.github.com/users/andrewPoulton/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @andrewPoulton, yes indeed we could update that for GPT-2, would be happy to get a PR.\r\nCan you check the generations are identical for a few seeds (it should be)?",
"Yeah, sure - what generations do you mean?",
"Fixed with #495"
] | 1,552 | 1,555 | 1,555 | NONE | null | When trying to train in mixed precision, after casting the model weights to fp16, overflow is bound to occur, since multiplication by 1e10 is used to mask the attention weights.
I noticed BERT multiplies by 1e4 (within fp16 range) instead, and the overflow problem doesn't occur and now it's training happily :)
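A tiny illustration of the failure mode (not the model code itself): the masking constant 1e10 is simply not representable in fp16, while 1e4 is, and is still large enough to push masked positions to effectively zero probability after the softmax.
```python
import torch

print(torch.tensor(1e10, dtype=torch.float16))  # inf     -> overflows fp16 (max finite value ~65504)
print(torch.tensor(1e4, dtype=torch.float16))   # 10000.  -> still representable

mask = torch.tensor([1.0, 1.0, 0.0], dtype=torch.float16)  # 0 marks a masked position
print((mask - 1.0) * 1e4)   # tensor([0., 0., -10000.]) -- finite, masked logit stays huge and negative
print((mask - 1.0) * 1e10)  # inf/NaN entries once the constant is cast down to fp16
```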
I'm happy to make the various changes and PR if wanted? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/382/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/381/comments | https://api.github.com/repos/huggingface/transformers/issues/381/events | https://github.com/huggingface/transformers/pull/381 | 421,481,924 | MDExOlB1bGxSZXF1ZXN0MjYxNTIxOTAw | 381 | Added missing imports. | {
"login": "e-tornike",
"id": 20404466,
"node_id": "MDQ6VXNlcjIwNDA0NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20404466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-tornike",
"html_url": "https://github.com/e-tornike",
"followers_url": "https://api.github.com/users/e-tornike/followers",
"following_url": "https://api.github.com/users/e-tornike/following{/other_user}",
"gists_url": "https://api.github.com/users/e-tornike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-tornike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-tornike/subscriptions",
"organizations_url": "https://api.github.com/users/e-tornike/orgs",
"repos_url": "https://api.github.com/users/e-tornike/repos",
"events_url": "https://api.github.com/users/e-tornike/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-tornike/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @tseretelitornike!"
] | 1,552 | 1,552 | 1,552 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/381/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/381",
"html_url": "https://github.com/huggingface/transformers/pull/381",
"diff_url": "https://github.com/huggingface/transformers/pull/381.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/381.patch",
"merged_at": 1552650881000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/380/comments | https://api.github.com/repos/huggingface/transformers/issues/380/events | https://github.com/huggingface/transformers/pull/380 | 420,913,074 | MDExOlB1bGxSZXF1ZXN0MjYxMDg2NjQ4 | 380 | typo in annotation | {
"login": "yongbowin",
"id": 20198500,
"node_id": "MDQ6VXNlcjIwMTk4NTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/20198500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yongbowin",
"html_url": "https://github.com/yongbowin",
"followers_url": "https://api.github.com/users/yongbowin/followers",
"following_url": "https://api.github.com/users/yongbowin/following{/other_user}",
"gists_url": "https://api.github.com/users/yongbowin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yongbowin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yongbowin/subscriptions",
"organizations_url": "https://api.github.com/users/yongbowin/orgs",
"repos_url": "https://api.github.com/users/yongbowin/repos",
"events_url": "https://api.github.com/users/yongbowin/events{/privacy}",
"received_events_url": "https://api.github.com/users/yongbowin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,552 | 1,552 | 1,552 | CONTRIBUTOR | null | modify `heruistic` to `heuristic` in line 660, `charcter` to `character` in line 661. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/380/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/380",
"html_url": "https://github.com/huggingface/transformers/pull/380",
"diff_url": "https://github.com/huggingface/transformers/pull/380.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/380.patch",
"merged_at": 1552575400000
} |
https://api.github.com/repos/huggingface/transformers/issues/379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/379/comments | https://api.github.com/repos/huggingface/transformers/issues/379/events | https://github.com/huggingface/transformers/pull/379 | 420,900,820 | MDExOlB1bGxSZXF1ZXN0MjYxMDc3MzI3 | 379 | typo | {
"login": "yongbowin",
"id": 20198500,
"node_id": "MDQ6VXNlcjIwMTk4NTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/20198500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yongbowin",
"html_url": "https://github.com/yongbowin",
"followers_url": "https://api.github.com/users/yongbowin/followers",
"following_url": "https://api.github.com/users/yongbowin/following{/other_user}",
"gists_url": "https://api.github.com/users/yongbowin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yongbowin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yongbowin/subscriptions",
"organizations_url": "https://api.github.com/users/yongbowin/orgs",
"repos_url": "https://api.github.com/users/yongbowin/repos",
"events_url": "https://api.github.com/users/yongbowin/events{/privacy}",
"received_events_url": "https://api.github.com/users/yongbowin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,552 | 1,552 | 1,552 | CONTRIBUTOR | null | modify `mull` to `null` in line 474 annotation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/379/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/379",
"html_url": "https://github.com/huggingface/transformers/pull/379",
"diff_url": "https://github.com/huggingface/transformers/pull/379.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/379.patch",
"merged_at": 1552555039000
} |
https://api.github.com/repos/huggingface/transformers/issues/378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/378/comments | https://api.github.com/repos/huggingface/transformers/issues/378/events | https://github.com/huggingface/transformers/pull/378 | 420,898,473 | MDExOlB1bGxSZXF1ZXN0MjYxMDc1NTc4 | 378 | Add absolute imports to GPT, GPT-2, Transfo-XL and fix empty nbest_predictions.json | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,552 | 1,566 | 1,552 | MEMBER | null | Fix #377
Fix #374 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/378/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/378",
"html_url": "https://github.com/huggingface/transformers/pull/378",
"diff_url": "https://github.com/huggingface/transformers/pull/378.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/378.patch",
"merged_at": 1552554048000
} |
https://api.github.com/repos/huggingface/transformers/issues/377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/377/comments | https://api.github.com/repos/huggingface/transformers/issues/377/events | https://github.com/huggingface/transformers/issues/377 | 420,722,840 | MDU6SXNzdWU0MjA3MjI4NDA= | 377 | Empty nbest_predictions.json for run_squad.py | {
"login": "luyang-ai4med",
"id": 15113700,
"node_id": "MDQ6VXNlcjE1MTEzNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/15113700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luyang-ai4med",
"html_url": "https://github.com/luyang-ai4med",
"followers_url": "https://api.github.com/users/luyang-ai4med/followers",
"following_url": "https://api.github.com/users/luyang-ai4med/following{/other_user}",
"gists_url": "https://api.github.com/users/luyang-ai4med/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luyang-ai4med/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luyang-ai4med/subscriptions",
"organizations_url": "https://api.github.com/users/luyang-ai4med/orgs",
"repos_url": "https://api.github.com/users/luyang-ai4med/repos",
"events_url": "https://api.github.com/users/luyang-ai4med/events{/privacy}",
"received_events_url": "https://api.github.com/users/luyang-ai4med/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Good catch! Do you want to submit a PR? Otherwise, I'll fix it in the next release.",
"Hi @thomwolf \r\nThe issue still persists, there were two extra indentations and you removed only one to move the line out of inner if-else but, one more indentation should be removed to bring [L620](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L620) out of outer if-else. Doing this produces non-empty `nbest_predictions.json` file."
] | 1,552 | 1,559 | 1,552 | NONE | null | This is due to an extra indentation on line 623 in run_squad.py.
It should be outside of the "if else" block. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/377/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/376/comments | https://api.github.com/repos/huggingface/transformers/issues/376/events | https://github.com/huggingface/transformers/issues/376 | 420,585,426 | MDU6SXNzdWU0MjA1ODU0MjY= | 376 | run_lm_finetuning generates short training cases | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry, I just realized this was mentioned in the [original PR](https://github.com/huggingface/pytorch-pretrained-BERT/pull/124).",
"Indeed. Happy to welcome a PR if you want to improve this example!",
"Working on it now! One question, though: It seems likely that I'll have to make significant changes. The reason for this is there is a significant random element in the concatenation of sentences, and so the exact number of training examples is difficult to know in advance. The concept of an index into the list of training examples also stops making sense because of this. This makes a simple implementation based on a Dataset object difficult.\r\n\r\nI can see two possible approaches to resolve this:\r\n1) Before training begins, the script does a pass over the dataset and pregenerates all training cases as InputExample objects. These could also be regenerated each epoch to increase diversity. The training cases could be packed into a Dataset object and sampled with RandomSampler, so we could still have meaningful progress bars. This is similar to Google's original implementation, where the data is pregenerated and stored in example files.\r\n\r\n2) The random sampling could occur on the fly at train time. This would avoid the time and storage needed for pregenerating training cases, but would become harder to measure the 'length' of an epoch in advance. \r\n\r\nI think either of these could be different enough that it might be better to implement it as a separate script (though it would still use a lot of the helper functions from the existing script). What do you think?",
"Hi @Rocketknight1, yes both solutions make sense and it seems better to have independent scripts indeed.\r\nMaybe you can start by drafting an independent script focusing on the pregenerated case and then see if the current `run_lm_finetuning` script can be updated for the on-the-fly case?",
"I think starting with pregenerated makes sense. However, maybe instead of replacing the old script, maybe we can keep that one as is? The original authors mentioned that concatenating sentences into training examples didn't make sense for their use-case. Possibly their dataset was something like chatbot conversations instead of long documents?\r\n\r\nEither way, they (and probably others) have use for the simple \"one sentence per example\" training system so I don't want to delete it entirely!",
"Sounds good to me!",
"@thomwolf I created PR #392 which includes the new functionality. "
] | 1,552 | 1,553 | 1,553 | MEMBER | null | In the original Tensorflow BERT repo, training cases for the Next Sentence task are generated by [concatenating multiple sentences](https://github.com/google-research/bert/blob/master/create_pretraining_data.py#L219) up to the maximum sequence length. In other words the "sentences" used are actually longer chunks of text split at sentence boundaries, which may include more than one sentence.
The LM finetuning example script doesn't do this, and just uses two single sentences as a training example, which means that most training examples are significantly shorter than max_seq_length. Would you like me to submit a patch to bring our implementation in line with theirs, or was leaving it out intentional? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/376/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/375/comments | https://api.github.com/repos/huggingface/transformers/issues/375/events | https://github.com/huggingface/transformers/issues/375 | 420,557,663 | MDU6SXNzdWU0MjA1NTc2NjM= | 375 | How to input the fine-tuned model? | {
"login": "jannenev",
"id": 11726563,
"node_id": "MDQ6VXNlcjExNzI2NTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/11726563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannenev",
"html_url": "https://github.com/jannenev",
"followers_url": "https://api.github.com/users/jannenev/followers",
"following_url": "https://api.github.com/users/jannenev/following{/other_user}",
"gists_url": "https://api.github.com/users/jannenev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannenev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannenev/subscriptions",
"organizations_url": "https://api.github.com/users/jannenev/orgs",
"repos_url": "https://api.github.com/users/jannenev/repos",
"events_url": "https://api.github.com/users/jannenev/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannenev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"maybe you can take a look at the \"cache_dir\" argument. For the run_classifier.py file, it is located at line 498",
"I tried with --cache_dir , giving the fine-tunings output directory as cache_dir. \r\nI added these 2 files to the directory: bert_config.json and vocab.txt from the original bert_basic_uncased\r\n(finetune out folder has finetuned model pytorch_model.bin file, which I am not sure if it used at all)\r\n\r\nIt gave exactly same accuracy (by 16 digits) as direct train/eval run_classifier.py directly with bert_basic_uncased. It would seem that with --cache_dir, it saves the original bert_base_uncased to that given --cache_dir. I am not sure if there is another difference. \r\n\r\n(I am runnin a 3-label classifier, for which I used the SST-2 from GLUE as basis, saved data in same format and added 3rd label to code in run_classifier.py\r\n\r\npython run_classifier.py \\\r\n --task_name SST-2 \\\r\n --do_train \\\r\n --do_eval \\\r\n --do_lower_case \\\r\n --data_dir ~/git/x/data/input-sst2/ \\\r\n --bert_model bert-base-uncased \\\r\n --cache_dir out_finetune_140/ \\\r\n --max_seq_length 140 \\\r\n --train_batch_size 16 \\\r\n --learning_rate 2e-5 \\\r\n --num_train_epochs 3.0 \\\r\n --output_dir out_testcache/\r\n\r\n\r\n\r\n",
"I can get classifier running on finetuned model by replacing bert-base-uncased with output folder of fine-tuning: --bert_model out_finetune_140/ \\ (and adding bert_config.json , vocab.txt to that folder)\r\n\r\nBut as a result, eval_accuracy went down from 0.918 to 0.916. \r\n(I wonder is it correct to use vocab.txt and bert_config.json from original bert_base_uncased, or would fine-tuned model need updated ones?)\r\n\r\nstep1:\r\npython run_lm_finetuning.py \\\r\n --bert_model bert-base-uncased \\\r\n --do_lower_case \\\r\n --do_train \\\r\n --train_file ~/git/xdata/lm-file.txt \\\r\n --output_dir out_finetune_140/ \\\r\n --num_train_epochs 2.0 \\\r\n --learning_rate 3e-5 \\\r\n --train_batch_size 16 \\\r\n --max_seq_length 140 \\\r\n\r\nstep2:\r\npython run_classifier.py \\\r\n --task_name SST-2 \\\r\n --do_train \\\r\n --do_eval \\\r\n --do_lower_case \\\r\n --data_dir ~/git/x/data/input-sst2/ \\\r\n --bert_model out_finetune_140/ \\\r\n --max_seq_length 140 \\\r\n --train_batch_size 16 \\\r\n --learning_rate 2e-5 \\\r\n --num_train_epochs 3.0 \\\r\n --output_dir out_finetune+class_140/\r\n",
"Hi, I think I find a workaround for this issue. You can first load the original model, and then insert this line into your python file (for example, after line 607 and 610 in run_classifier.py): \r\nmodel.load_state_dict(torch.load(\"output_dir/pytorch_model.bin\"))\r\nthen the model will be your customized fine-tuned model. And there is no need to change anything else (for example, the config file or vocab file)",
"I would suggest that you add a separate logic to load your fine-tuned model and perform prediction. Your code will be very similar to the eval but you won't need (actually won't have access to) labels during prediction and hence no need of the accuracy code in eval etc. Simply collect your predictions in a list and write to a file called \"pred_results.txt\".\r\nI added some new flags (\"do_pred\" and \"model_path\"), modified the eval logic little bit to ignore labels, and wrote outputs to a file. Things are working for me. ",
"Hi LeenaShekhar\r\n\r\nWould you mind showing the code you wrote to perform predictions of a trained model? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,552 | 1,560 | 1,560 | NONE | null | I ran the fine-tuning as instructed in the "LM Fine-tuning" example:
python run_lm_finetuning.py \
 --bert_model bert-base-uncased \
--output_dir models \
...
As a result the fine-tuned model is now in models/pytorch_model.bin
But how do I use it to classify? The example doesn't mention that.
I can't find any parameter for passing in the fine-tuned model.
I can run classification with only the pretrained model like this:
export GLUE_DIR=~/git/GLUE/glue_data/
python run_classifier.py \
--task_name SST-2 \
--do_train \
--do_eval \
--do_lower_case \
--data_dir ~/git/x/data/input-sst2/ \
--bert_model bert-base-uncased \
--max_seq_length 128 \
--train_batch_size 16 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir out/
The instruction on google-bert says
"Once you have trained your classifier you can use it in inference mode by using the --do_predict=true command."
If I try that, it gives:
"run_classifier.py: error: unrecognized arguments: --do_predict=true"
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/375/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/374/comments | https://api.github.com/repos/huggingface/transformers/issues/374/events | https://github.com/huggingface/transformers/pull/374 | 420,484,831 | MDExOlB1bGxSZXF1ZXN0MjYwNzU0MzU4 | 374 | handle ImportError exception when used from projects outside | {
"login": "deepbrain",
"id": 10003025,
"node_id": "MDQ6VXNlcjEwMDAzMDI1",
"avatar_url": "https://avatars.githubusercontent.com/u/10003025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deepbrain",
"html_url": "https://github.com/deepbrain",
"followers_url": "https://api.github.com/users/deepbrain/followers",
"following_url": "https://api.github.com/users/deepbrain/following{/other_user}",
"gists_url": "https://api.github.com/users/deepbrain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deepbrain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deepbrain/subscriptions",
"organizations_url": "https://api.github.com/users/deepbrain/orgs",
"repos_url": "https://api.github.com/users/deepbrain/repos",
"events_url": "https://api.github.com/users/deepbrain/events{/privacy}",
"received_events_url": "https://api.github.com/users/deepbrain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just doing `from pytorch_pretrained_bert import BertTokenizer, BertModel` in your project doesn't work?\r\n\r\nThat's what they do in [AllenNLP](https://github.com/allenai/allennlp/blob/3f0953d19de3676ea82e642659fc96d90690e34d/allennlp/modules/token_embedders/bert_token_embedder.py#L14) or [flair](https://github.com/zalandoresearch/flair/blob/797c958d0e8c256531f2cea37508e7becb2026cb/flair/embeddings.py#L14)",
"it does work, however, I don't need the higher level classes for my model, so I am sub classing the OpenAIGPTPreTrainedModel and Block and they are not included into the __init__.py. Including them into the __init__.py also solves the issue with the importException:\r\n\r\nfrom .modeling_openai import (OpenAIGPTConfig, OpenAIGPTModel,\r\nOpenAIGPTLMHeadModel, OpenAIGPTDoubleHeadsModel, OpenAIGPTPreTrainedModel, Block, load_tf_weights_in_openai_gpt)\r\n",
"I see. I think we can fix this the same way I did in the `bert` case by adding `from __future__ import absolute_import`. If you don't mind I'll do a quick PR on that."
] | 1,552 | 1,552 | 1,552 | NONE | null | The relative path that starts with . does not work when a file is used from an outside project. I added safe code to handle the ImportError exception in this case, so I can use the source file without having to make local changes to it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/374/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/374",
"html_url": "https://github.com/huggingface/transformers/pull/374",
"diff_url": "https://github.com/huggingface/transformers/pull/374.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/374.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/373/comments | https://api.github.com/repos/huggingface/transformers/issues/373/events | https://github.com/huggingface/transformers/issues/373 | 420,367,615 | MDU6SXNzdWU0MjAzNjc2MTU= | 373 | performance degraded when using paddings between queries and contexts. | {
"login": "leonwyang",
"id": 32276166,
"node_id": "MDQ6VXNlcjMyMjc2MTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/32276166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leonwyang",
"html_url": "https://github.com/leonwyang",
"followers_url": "https://api.github.com/users/leonwyang/followers",
"following_url": "https://api.github.com/users/leonwyang/following{/other_user}",
"gists_url": "https://api.github.com/users/leonwyang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leonwyang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leonwyang/subscriptions",
"organizations_url": "https://api.github.com/users/leonwyang/orgs",
"repos_url": "https://api.github.com/users/leonwyang/repos",
"events_url": "https://api.github.com/users/leonwyang/events{/privacy}",
"received_events_url": "https://api.github.com/users/leonwyang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
}
] | closed | false | null | [] | [
"Similar problem. My token ids is something like \"[cls]qqqq[sep]0000cccccccc[sep]00000\". Have you solve it? or is there anyone met the similar problem?",
"> Similar problem. My token ids is something like \"[cls]qqqq[sep]0000cccccccc[sep]00000\". Have you solve it? or is there anyone met the similar problem?\r\n\r\nI felt like this is caused by the way how we pretrain BERT. BERT is pretrained on contiguous texts. The way we pad zeros in between broke such continuity so essentially we need to re-train the whole model. \r\n\r\nSince I was trying to separate query and context, the way I tackled this is just masking all query tokens to create the context and same for the query. ",
"> > Similar problem. My token ids is something like \"[cls]qqqq[sep]0000cccccccc[sep]00000\". Have you solve it? or is there anyone met the similar problem?\r\n> \r\n> I felt like this is caused by the way how we pretrain BERT. BERT is pretrained on contiguous texts. The way we pad zeros in between broke such continuity so essentially we need to re-train the whole model.\r\n> \r\n> Since I was trying to separate query and context, the way I tackled this is just masking all query tokens to create the context and same for the query.\r\n\r\nIt could be the same way as inputing the query and context separately if we mask the query or context part. Anyway, I will try it. Thanks."
] | 1,552 | 1,554 | 1,554 | NONE | null | I just want to ask this here and see whether other people encountered the same situation.
I am doing modifications on the run_squad.py example.
So for the original training feature, the input ids are [cls]qqqqq[sep]cccccc000000. The attention mask is just something like 111111100000, where the first k positions are marked with 1 and the rest with 0.
I tried to generate input ids that look like [cls]qqqq[sep]0000[sep]cccccccc00000, so that I can have a fixed length for the query and the context with proper padding. Also, I changed the attention mask accordingly, namely to something like 11110001111100000.
However, when I trained the model on this new feature, the score degraded from 76 to 44 for the BertForQuestionAnswering model. I am wondering if there are any catastrophic effects from doing this kind of masking? Has anyone experienced a similar situation? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/373/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/372/comments | https://api.github.com/repos/huggingface/transformers/issues/372/events | https://github.com/huggingface/transformers/issues/372 | 420,279,829 | MDU6SXNzdWU0MjAyNzk4Mjk= | 372 | a single sentence classification task, should the max length of sentence limited to half of 512, that is to say 256 | {
"login": "alphanlp",
"id": 12368732,
"node_id": "MDQ6VXNlcjEyMzY4NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/12368732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alphanlp",
"html_url": "https://github.com/alphanlp",
"followers_url": "https://api.github.com/users/alphanlp/followers",
"following_url": "https://api.github.com/users/alphanlp/following{/other_user}",
"gists_url": "https://api.github.com/users/alphanlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alphanlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alphanlp/subscriptions",
"organizations_url": "https://api.github.com/users/alphanlp/orgs",
"repos_url": "https://api.github.com/users/alphanlp/repos",
"events_url": "https://api.github.com/users/alphanlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/alphanlp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Why should it be limited to half of 512?",
"> Why should it be limited to half of 512?\r\n\r\ncause when do train, we have sentence embedding 0 and 1, but in a single sentence classification task ,we just embedding 0, if this get bad influence",
"You can just set the whole sequence to sentence 0. Create a DataProcessor class for your task and set the whole input sequence to `text_a`, example:\r\n\r\n```\r\nclass MyProcessor(DataProcessor):\r\n# some other methods here\r\n\r\n def _create_examples(self, lines, set_type):\r\n \"\"\"Creates examples for the training and dev sets.\"\"\"\r\n examples = []\r\n for (i, line) in enumerate(lines):\r\n guid = \"%s-%s\" % (set_type, i)\r\n text_a = line[1]\r\n label = line[0]\r\n examples.append(\r\n InputExample(guid=guid, text_a=text_a, text_b=None, label=label))\r\n return examples\r\n``` \r\nNotice `text_b=None`.",
"How should I do if I have not only a sentence, but a whole text? \r\nI don't clearly understand how to extend `BertForSequenceClassification` with my own dataset for training/evaluating.\r\nI have a dataset consisting of text/label pairs, where text can have multiple sentences. ",
"Just send in the whole text as one \"sentence\", the limit on a sequence length that can be sent at once to BERT is 512 tokens",
"Ok, thanks.\r\nOne more question related to classification. BERT tokenizes my sentences pretty strange:\r\n> 04/27/2019 16:08:32 - INFO - __main__ - tokens: [CLS] @ bra ##yy ##yy ##ant Так акт ##иви ##ровала ##сь новая карта , ст ##ара ##я и была не ##ак ##тив ##на . [SEP]\r\n\r\nWhy are more than a half of the words a separated with `#`? I mean, these words are on russian and many of them are splited to several parts with `#`, though it is one word. Should this be fixed during training?",
"That's the WordPiece tokenization, its a way to match subwords when an out of vocabulary word is encountered. It's explained in the bert paper with references. It's as it should be. ",
"Ok, thank you so much.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,552 | 1,562 | 1,562 | NONE | null | Hi, if I have a single-sentence classification task, should the max sentence length be limited to half of 512, that is to say 256? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/372/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/371/comments | https://api.github.com/repos/huggingface/transformers/issues/371/events | https://github.com/huggingface/transformers/pull/371 | 420,279,516 | MDExOlB1bGxSZXF1ZXN0MjYwNTk1MzQx | 371 | Simplify code, delete redundant line | {
"login": "yongbowin",
"id": 20198500,
"node_id": "MDQ6VXNlcjIwMTk4NTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/20198500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yongbowin",
"html_url": "https://github.com/yongbowin",
"followers_url": "https://api.github.com/users/yongbowin/followers",
"following_url": "https://api.github.com/users/yongbowin/following{/other_user}",
"gists_url": "https://api.github.com/users/yongbowin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yongbowin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yongbowin/subscriptions",
"organizations_url": "https://api.github.com/users/yongbowin/orgs",
"repos_url": "https://api.github.com/users/yongbowin/repos",
"events_url": "https://api.github.com/users/yongbowin/events{/privacy}",
"received_events_url": "https://api.github.com/users/yongbowin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"wouldn't this cause some kind of indentation error? (I don't have time to test the change sorry)"
] | 1,552 | 1,552 | 1,552 | CONTRIBUTOR | null | Delete the redundant line 597 `if args.train`, which has the same function as line 547, in order to simplify the code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/371/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/371",
"html_url": "https://github.com/huggingface/transformers/pull/371",
"diff_url": "https://github.com/huggingface/transformers/pull/371.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/371.patch",
"merged_at": 1552550613000
} |
https://api.github.com/repos/huggingface/transformers/issues/370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/370/comments | https://api.github.com/repos/huggingface/transformers/issues/370/events | https://github.com/huggingface/transformers/issues/370 | 420,195,472 | MDU6SXNzdWU0MjAxOTU0NzI= | 370 | What is Synthetic Self-Training? | {
"login": "hsm207",
"id": 2398765,
"node_id": "MDQ6VXNlcjIzOTg3NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2398765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hsm207",
"html_url": "https://github.com/hsm207",
"followers_url": "https://api.github.com/users/hsm207/followers",
"following_url": "https://api.github.com/users/hsm207/following{/other_user}",
"gists_url": "https://api.github.com/users/hsm207/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hsm207/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hsm207/subscriptions",
"organizations_url": "https://api.github.com/users/hsm207/orgs",
"repos_url": "https://api.github.com/users/hsm207/repos",
"events_url": "https://api.github.com/users/hsm207/events{/privacy}",
"received_events_url": "https://api.github.com/users/hsm207/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Check Jacob Devlin slides starting from slide 26 [here](https://nlp.stanford.edu/seminar/details/jdevlin.pdf?fbclid=IwAR2TBFCJOeZ9cGhxB-z5cJJ17vHN4W25oWsjI8NqJoTEmlYIYEKG7oh4tlY)",
"@thomwolf thanks, the slides were helpful. Do you know if there is a recording of the talk publicly available somewhere?",
"I don't know!",
"It was a very crowded room and these talks are generally not recorded, sorry…",
"Is the synthetic self training module in this code?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Same question, will the synthetic self training module be in this code?",
"Presently not in the library and there are no short-term plans to add synthetic self-training to the library."
] | 1,552 | 1,563 | 1,559 | NONE | null | The current best performing model on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) is BERT + N-Gram Masking + Synthetic Self-Training (ensemble):

What is Synthetic Self-Training?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/370/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/369/comments | https://api.github.com/repos/huggingface/transformers/issues/369/events | https://github.com/huggingface/transformers/issues/369 | 420,149,402 | MDU6SXNzdWU0MjAxNDk0MDI= | 369 | BertForQuestionAnswering: How to split output between query hidden state and context hidden state | {
"login": "julietokwara",
"id": 26914396,
"node_id": "MDQ6VXNlcjI2OTE0Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/26914396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julietokwara",
"html_url": "https://github.com/julietokwara",
"followers_url": "https://api.github.com/users/julietokwara/followers",
"following_url": "https://api.github.com/users/julietokwara/following{/other_user}",
"gists_url": "https://api.github.com/users/julietokwara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julietokwara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julietokwara/subscriptions",
"organizations_url": "https://api.github.com/users/julietokwara/orgs",
"repos_url": "https://api.github.com/users/julietokwara/repos",
"events_url": "https://api.github.com/users/julietokwara/events{/privacy}",
"received_events_url": "https://api.github.com/users/julietokwara/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If you want only the context you can find the index from the segment vector by finding the last first 1 in the vector and splitting the query_context on that index. Then the context will be everything after the 1 index and everything before will be the question.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,552 | 1,560 | 1,560 | NONE | null | I've made several attempts at this, but they all seem to fail. Do you have a good way to do it? Right now, passing what I thought was just the context hidden state to the final output layer in run_squad.py drops my scores (F1) by 10 points. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/369/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/368/comments | https://api.github.com/repos/huggingface/transformers/issues/368/events | https://github.com/huggingface/transformers/issues/368 | 419,879,930 | MDU6SXNzdWU0MTk4Nzk5MzA= | 368 | When I fine-tune BERT on my server, it always says Segmentation fault? | {
"login": "bianximo",
"id": 13324167,
"node_id": "MDQ6VXNlcjEzMzI0MTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/13324167?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bianximo",
"html_url": "https://github.com/bianximo",
"followers_url": "https://api.github.com/users/bianximo/followers",
"following_url": "https://api.github.com/users/bianximo/following{/other_user}",
"gists_url": "https://api.github.com/users/bianximo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bianximo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bianximo/subscriptions",
"organizations_url": "https://api.github.com/users/bianximo/orgs",
"repos_url": "https://api.github.com/users/bianximo/repos",
"events_url": "https://api.github.com/users/bianximo/events{/privacy}",
"received_events_url": "https://api.github.com/users/bianximo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I had the same issue. I'm getting seg fault on an aws deep learning ami with a tesla v100 gpu instance. I get the error with or with out fp16",
"I also have the same problem. Did you guys figure out any solution? \r\nI am able to load the data, however at the first epoch 0, I see the error segmentation fault. ",
"Can you try running the model:\r\n- with a very small batch size to check if it's an OOM error\r\n- with `CUDA_LAUNCH_BLOCKING=1` to see the exact line causing the error\r\n?",
"setting the env variable CUDA_LAUNCH_BLOCKING=1 doesn't give me any additional error messages. I still see the same error. \r\n \r\nTried it with batch size 1 and I get the same issue",
"I got the code to run on my school server. I updated the gcc using this `conda install -c psi4 gcc-5`. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am also experiencing this issue. Anyone else figure it out?"
] | 1,552 | 1,590 | 1,563 | NONE | null | I have done some modifications on BertForSequenceClassification to apply it to a multilabel prediction task, but
when I run my code on my server, it always reports a segmentation fault when it reaches
"loss = model(input_ids, segment_ids, input_mask, label_ids)", and even if I don't use the GPU and fp16, the same fault occurs.
But if I run the code on my PC, it works well, just very slowly.
What's wrong? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/368/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/367/comments | https://api.github.com/repos/huggingface/transformers/issues/367/events | https://github.com/huggingface/transformers/issues/367 | 419,841,545 | MDU6SXNzdWU0MTk4NDE1NDU= | 367 | how does the run_squad.py deal with non-answerable questions | {
"login": "elephantomkk",
"id": 48470292,
"node_id": "MDQ6VXNlcjQ4NDcwMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/48470292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elephantomkk",
"html_url": "https://github.com/elephantomkk",
"followers_url": "https://api.github.com/users/elephantomkk/followers",
"following_url": "https://api.github.com/users/elephantomkk/following{/other_user}",
"gists_url": "https://api.github.com/users/elephantomkk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elephantomkk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elephantomkk/subscriptions",
"organizations_url": "https://api.github.com/users/elephantomkk/orgs",
"repos_url": "https://api.github.com/users/elephantomkk/repos",
"events_url": "https://api.github.com/users/elephantomkk/events{/privacy}",
"received_events_url": "https://api.github.com/users/elephantomkk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
}
] | closed | false | null | [] | [
"@Liangtaiwan and @abeljim were the contributors of the `run_squad.py` example.\r\nMaybe they can help.",
"Hi @elephantomkk,\r\n\r\nThe non-answerable solving is using [CLS] token as the ground truth.\r\nAs a result, the start login = and end logit = -1.\r\n\r\nYou can find it out in this repo code or official Bert code.\r\nAlso, the method is mentioned in Jacob Devlin's slide.\r\nhttps://nlp.stanford.edu/seminar/details/jdevlin.pdf?fbclid=IwAR2TBFCJOeZ9cGhxB-z5cJJ17vHN4W25oWsjI8NqJoTEmlYIYEKG7oh4tlY",
"Thanks for the reply! I tested that and found the performance on the non-answerable is not so good compared with the answerables:("
] | 1,552 | 1,567 | 1,552 | NONE | null | Hi,
I want to do a similar reading comprehension task with non-answerable questions, but I couldn't figure out from the code how you deal with it. Did you add an additional token for this? Or do you only output no-answer when the start logit = end logit = -1? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/367/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/366/comments | https://api.github.com/repos/huggingface/transformers/issues/366/events | https://github.com/huggingface/transformers/issues/366 | 419,689,340 | MDU6SXNzdWU0MTk2ODkzNDA= | 366 | Vocabulary file not available for SQuAD predictions | {
"login": "beyondbeneath",
"id": 25541848,
"node_id": "MDQ6VXNlcjI1NTQxODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/25541848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beyondbeneath",
"html_url": "https://github.com/beyondbeneath",
"followers_url": "https://api.github.com/users/beyondbeneath/followers",
"following_url": "https://api.github.com/users/beyondbeneath/following{/other_user}",
"gists_url": "https://api.github.com/users/beyondbeneath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/beyondbeneath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beyondbeneath/subscriptions",
"organizations_url": "https://api.github.com/users/beyondbeneath/orgs",
"repos_url": "https://api.github.com/users/beyondbeneath/repos",
"events_url": "https://api.github.com/users/beyondbeneath/events{/privacy}",
"received_events_url": "https://api.github.com/users/beyondbeneath/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Indeed, this example could be improved. I would happy to welcome a PR on that.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,552 | 1,558 | 1,558 | NONE | null | There appears to be a bug in the way the vocabulary file is handled.
For example, if we execute [`run_squad.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py) with `--do_train`, and set the `--output_dir` to `/tmp/debug_squad/`, we successfully build a model and the resulting model files (`bert_config.json` and `pytorch_model.bin`) are saved in the appropriate directory.
Then we execute [`run_squad.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py) with `--do_predict`, and this time set `--bert_model` to `/tmp/debug_squad`, which according to [`modelling.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py) is all that is required (see the [`from_pretrained`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L532) method).
However, this raises a bug as the tokenizer cannot load a vocabulary file. If we inspect [`tokenizer.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization.py#L138) we can see what happens: if you are using a pre-trained BERT model, it will look for the vocab file at a specific URL. If you input a directory as your `bert_model`, it assumes you have a file `vocab.txt` (`VOCAB_NAME`) in that same directory. It also appears to check the cache which may or may not be present.
We were able to fix this by simply downloading the appropriate vocab file for our base BERT model (e.g., `'bert-base-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt"`), renaming it to `vocab.txt`, and placing it in `/tmp/debug_squad`, however it feels as though this should be better handled by the train/predict pipeline. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/366/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/366/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/365/comments | https://api.github.com/repos/huggingface/transformers/issues/365/events | https://github.com/huggingface/transformers/issues/365 | 419,473,116 | MDU6SXNzdWU0MTk0NzMxMTY= | 365 | BERT accuracy reduced after providing custom training..The answer is also not correct | {
"login": "shuvadibp",
"id": 37171714,
"node_id": "MDQ6VXNlcjM3MTcxNzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/37171714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuvadibp",
"html_url": "https://github.com/shuvadibp",
"followers_url": "https://api.github.com/users/shuvadibp/followers",
"following_url": "https://api.github.com/users/shuvadibp/following{/other_user}",
"gists_url": "https://api.github.com/users/shuvadibp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuvadibp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuvadibp/subscriptions",
"organizations_url": "https://api.github.com/users/shuvadibp/orgs",
"repos_url": "https://api.github.com/users/shuvadibp/repos",
"events_url": "https://api.github.com/users/shuvadibp/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuvadibp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can you give a simple self-contained script to reproduce your issue?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,552 | 1,558 | 1,558 | NONE | null | I have trained Google BERT on a custom training set.
I included the exact question and answer, along with the surrounding context from the input document, in the training file and trained BERT on it.
With the newly generated checkpoints (ckpt) I am still getting the same wrong answer as before training. However, the probability reported for it in nbest_predictions.json is lower this time. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/365/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/364/comments | https://api.github.com/repos/huggingface/transformers/issues/364/events | https://github.com/huggingface/transformers/issues/364 | 419,292,310 | MDU6SXNzdWU0MTkyOTIzMTA= | 364 | Potential redundancy in run_classifier.py example script | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I agree with you, this part of the API could be improved.\r\n\r\nThe BERT model is now used in several third-party libraries like AllenNLP and FLAIR so we have to be careful not to make any breaking change on this model.\r\n\r\nWe could add a flag to get full output maybe.",
"Mind if I submit a PR adding a flag that allows getting the full output in the case of `if labels is not None`?",
"I'm ok to welcome a PR on that but there is one issue with flags in `forward()` call and multi-GPU you may or may not be aware of and we need to think about:\r\nAll the inputs to the `forward()` call are split across GPUs so having a non-batched input like a flag break DataParallel for multi-gpu.\r\nSo maybe we need to add a general flag in the models which can be set. Maybe you can try to draft a PR and we check it behave well on the examples then.\r\nMaybe you can also have a deep look at the way inputs are split in DataParallel and check whether a solution with flag in the arguments of the `forward()` call work.",
"Hmm, okay, I was not aware of that. I think adding it as a property on the model itself is arguably an unnecessary over-complication for a simple matter. Returning a tuple value is much cleaner and the user can always discard the information that they don't care. Although I do understand the concern of backward compatibility. So I won't do this PR and will let you guys make the decision between the two options. Personally I lean more towards the API-breaking option.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,552 | 1,558 | 1,558 | CONTRIBUTOR | null | https://github.com/huggingface/pytorch-pretrained-BERT/blob/7cc35c31040d8bdfcadc274c087d6a73c2036210/examples/run_classifier.py#L641-L642
Here we are calling the model twice. I understand that the model returns different things depending on the presence of `label_ids`, but this could actually be quite expensive. I think we can change the `if labels is not None` branch in the model code below to return two items instead, but I'm not sure if it will break things elsewhere.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/7cc35c31040d8bdfcadc274c087d6a73c2036210/pytorch_pretrained_bert/modeling.py#L969-L979
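Concretely, I am imagining something along these lines (an untested sketch of the change, not the actual code):

```python
# sketch: let BertForSequenceClassification.forward return the logits alongside the loss,
# so callers don't need a second forward pass just to get predictions
def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None):
    _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask,
                                 output_all_encoded_layers=False)
    logits = self.classifier(self.dropout(pooled_output))
    if labels is not None:
        loss_fct = CrossEntropyLoss()
        loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
        return loss, logits  # <-- the (breaking) change: return both
    return logits
```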
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/364/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/363/comments | https://api.github.com/repos/huggingface/transformers/issues/363/events | https://github.com/huggingface/transformers/issues/363 | 419,139,787 | MDU6SXNzdWU0MTkxMzk3ODc= | 363 | Separator token for custom QA input (multi paragraph, longer than 512) | {
"login": "bugtig",
"id": 28372188,
"node_id": "MDQ6VXNlcjI4MzcyMTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/28372188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bugtig",
"html_url": "https://github.com/bugtig",
"followers_url": "https://api.github.com/users/bugtig/followers",
"following_url": "https://api.github.com/users/bugtig/following{/other_user}",
"gists_url": "https://api.github.com/users/bugtig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bugtig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bugtig/subscriptions",
"organizations_url": "https://api.github.com/users/bugtig/orgs",
"repos_url": "https://api.github.com/users/bugtig/repos",
"events_url": "https://api.github.com/users/bugtig/events{/privacy}",
"received_events_url": "https://api.github.com/users/bugtig/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,552 | 1,558 | 1,558 | NONE | null | Hello!
I'm trying to extract features for a QA task where the document is composed of multiple disparate paragraphs. So my input is:
question ||| document
where document is {para1 SEP para2 SEP para3 SEP}, so overall, it's something like:
question ||| para1 SEP para2 SEP para3 SEP
My question is: Is it okay to use the default BERT [SEP] token for the paragraph separation token as above? Or should I use something like the NULL token instead, or simply remove the paragraph separation token completely?
Secondly, my input is longer than 512 tokens, so I'm thinking of doing sliding windows like:
question ||| doc[:512]
question ||| doc[256:768]
and so on, finally merging the overlaps by averaging. Would this be correct?
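For the second part, the chunking and merging I have in mind is roughly this (a sketch only — the window/stride sizes are just the numbers above, and overlapping positions are merged by averaging):

```python
import numpy as np

def sliding_windows(tokens, window=512, stride=256):
    """Yield (start, chunk) pairs that cover the whole document with overlap."""
    start = 0
    while True:
        yield start, tokens[start:start + window]
        if start + window >= len(tokens):
            break
        start += stride

def merge_overlaps(chunk_feats, total_len, hidden_size):
    """chunk_feats: list of (start, array[chunk_len, hidden_size]); average where windows overlap."""
    summed = np.zeros((total_len, hidden_size))
    counts = np.zeros((total_len, 1))
    for start, feats in chunk_feats:
        summed[start:start + len(feats)] += feats
        counts[start:start + len(feats)] += 1
    return summed / counts
```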
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/363/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/362/comments | https://api.github.com/repos/huggingface/transformers/issues/362/events | https://github.com/huggingface/transformers/pull/362 | 419,078,044 | MDExOlB1bGxSZXF1ZXN0MjU5NjkyOTc3 | 362 | Make the hyperlink of NVIDIA Apex clickable | {
"login": "bharatr21",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bharatr21",
"html_url": "https://github.com/bharatr21",
"followers_url": "https://api.github.com/users/bharatr21/followers",
"following_url": "https://api.github.com/users/bharatr21/following{/other_user}",
"gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions",
"organizations_url": "https://api.github.com/users/bharatr21/orgs",
"repos_url": "https://api.github.com/users/bharatr21/repos",
"events_url": "https://api.github.com/users/bharatr21/events{/privacy}",
"received_events_url": "https://api.github.com/users/bharatr21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,552 | 1,552 | 1,552 | CONTRIBUTOR | null | In the case of the ImportError in modeling.py [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/7cc35c31040d8bdfcadc274c087d6a73c2036210/pytorch_pretrained_bert/modeling.py#L219), make the hyperlink to NVIDIA Apex redirect properly by spacing the '.' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/362/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/362",
"html_url": "https://github.com/huggingface/transformers/pull/362",
"diff_url": "https://github.com/huggingface/transformers/pull/362.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/362.patch",
"merged_at": 1552291732000
} |
https://api.github.com/repos/huggingface/transformers/issues/361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/361/comments | https://api.github.com/repos/huggingface/transformers/issues/361/events | https://github.com/huggingface/transformers/pull/361 | 419,008,830 | MDExOlB1bGxSZXF1ZXN0MjU5NjQ3NjM0 | 361 | Correct line number in README for classes | {
"login": "junjieqian",
"id": 852826,
"node_id": "MDQ6VXNlcjg1MjgyNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/852826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/junjieqian",
"html_url": "https://github.com/junjieqian",
"followers_url": "https://api.github.com/users/junjieqian/followers",
"following_url": "https://api.github.com/users/junjieqian/following{/other_user}",
"gists_url": "https://api.github.com/users/junjieqian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/junjieqian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junjieqian/subscriptions",
"organizations_url": "https://api.github.com/users/junjieqian/orgs",
"repos_url": "https://api.github.com/users/junjieqian/repos",
"events_url": "https://api.github.com/users/junjieqian/events{/privacy}",
"received_events_url": "https://api.github.com/users/junjieqian/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @junjieqian!"
] | 1,552 | 1,553 | 1,552 | CONTRIBUTOR | null | Correct the linked line number in README for classes | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/361/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/361",
"html_url": "https://github.com/huggingface/transformers/pull/361",
"diff_url": "https://github.com/huggingface/transformers/pull/361.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/361.patch",
"merged_at": 1552291708000
} |
https://api.github.com/repos/huggingface/transformers/issues/360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/360/comments | https://api.github.com/repos/huggingface/transformers/issues/360/events | https://github.com/huggingface/transformers/issues/360 | 418,882,345 | MDU6SXNzdWU0MTg4ODIzNDU= | 360 | Ranking predictions with BertForQuestionAnswering | {
"login": "gqoew",
"id": 32342701,
"node_id": "MDQ6VXNlcjMyMzQyNzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/32342701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gqoew",
"html_url": "https://github.com/gqoew",
"followers_url": "https://api.github.com/users/gqoew/followers",
"following_url": "https://api.github.com/users/gqoew/following{/other_user}",
"gists_url": "https://api.github.com/users/gqoew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gqoew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gqoew/subscriptions",
"organizations_url": "https://api.github.com/users/gqoew/orgs",
"repos_url": "https://api.github.com/users/gqoew/repos",
"events_url": "https://api.github.com/users/gqoew/events{/privacy}",
"received_events_url": "https://api.github.com/users/gqoew/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I am also interested in this, it looks like we would have to append the prediction probability to the `all_predictions` JSON output.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,552 | 1,558 | 1,558 | NONE | null | I am using `BertForQuestionAnswering`
I am trying to make predictions for the same question asked over different paragraphs. The output is an `OrderedDict` of tuples of the form `(paragraphID, answer)`. How can I rank those predictions to get the most probable answer across all paragraphs?
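What I would like is to pick the span whose start/end logits score highest across all paragraphs — roughly like this (a sketch, not tested; using the sum of the two logits as the score is my own guess):

```python
import torch

def best_answer(encoded_paragraphs, model):
    """encoded_paragraphs: list of (paragraph_id, input_ids, token_type_ids, attention_mask)."""
    best = None
    for pid, input_ids, token_type_ids, attention_mask in encoded_paragraphs:
        with torch.no_grad():
            start_logits, end_logits = model(input_ids, token_type_ids, attention_mask)
        start_score, start_idx = start_logits[0].max(-1)
        end_score, end_idx = end_logits[0].max(-1)
        score = (start_score + end_score).item()
        if best is None or score > best[0]:
            best = (score, pid, start_idx.item(), end_idx.item())
    return best  # (score, paragraph_id, answer_start, answer_end)
```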
Thanks for the great repo! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/360/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/360/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/359/comments | https://api.github.com/repos/huggingface/transformers/issues/359/events | https://github.com/huggingface/transformers/pull/359 | 418,872,236 | MDExOlB1bGxSZXF1ZXN0MjU5NTQwOTU0 | 359 | Update run_gpt2.py | {
"login": "elonmuskceo",
"id": 47338871,
"node_id": "MDQ6VXNlcjQ3MzM4ODcx",
"avatar_url": "https://avatars.githubusercontent.com/u/47338871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elonmuskceo",
"html_url": "https://github.com/elonmuskceo",
"followers_url": "https://api.github.com/users/elonmuskceo/followers",
"following_url": "https://api.github.com/users/elonmuskceo/following{/other_user}",
"gists_url": "https://api.github.com/users/elonmuskceo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elonmuskceo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elonmuskceo/subscriptions",
"organizations_url": "https://api.github.com/users/elonmuskceo/orgs",
"repos_url": "https://api.github.com/users/elonmuskceo/repos",
"events_url": "https://api.github.com/users/elonmuskceo/events{/privacy}",
"received_events_url": "https://api.github.com/users/elonmuskceo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks Elon"
] | 1,552 | 1,561 | 1,552 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/359/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/359",
"html_url": "https://github.com/huggingface/transformers/pull/359",
"diff_url": "https://github.com/huggingface/transformers/pull/359.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/359.patch",
"merged_at": 1552291676000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/358/comments | https://api.github.com/repos/huggingface/transformers/issues/358/events | https://github.com/huggingface/transformers/pull/358 | 418,274,901 | MDExOlB1bGxSZXF1ZXN0MjU5MDg1NzYx | 358 | add 'padding_idx=0' for BertEmbeddings | {
"login": "haozheji",
"id": 25786613,
"node_id": "MDQ6VXNlcjI1Nzg2NjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/25786613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haozheji",
"html_url": "https://github.com/haozheji",
"followers_url": "https://api.github.com/users/haozheji/followers",
"following_url": "https://api.github.com/users/haozheji/following{/other_user}",
"gists_url": "https://api.github.com/users/haozheji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haozheji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haozheji/subscriptions",
"organizations_url": "https://api.github.com/users/haozheji/orgs",
"repos_url": "https://api.github.com/users/haozheji/repos",
"events_url": "https://api.github.com/users/haozheji/events{/privacy}",
"received_events_url": "https://api.github.com/users/haozheji/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @cdjhz "
] | 1,551 | 1,552 | 1,552 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/358/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/358",
"html_url": "https://github.com/huggingface/transformers/pull/358",
"diff_url": "https://github.com/huggingface/transformers/pull/358.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/358.patch",
"merged_at": 1552291615000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/357/comments | https://api.github.com/repos/huggingface/transformers/issues/357/events | https://github.com/huggingface/transformers/pull/357 | 418,202,612 | MDExOlB1bGxSZXF1ZXN0MjU5MDI5Mjg5 | 357 | Use Dropout Layer in OpenAIGPTMultipleChoiceHead | {
"login": "pglock",
"id": 8183619,
"node_id": "MDQ6VXNlcjgxODM2MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8183619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pglock",
"html_url": "https://github.com/pglock",
"followers_url": "https://api.github.com/users/pglock/followers",
"following_url": "https://api.github.com/users/pglock/following{/other_user}",
"gists_url": "https://api.github.com/users/pglock/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pglock/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pglock/subscriptions",
"organizations_url": "https://api.github.com/users/pglock/orgs",
"repos_url": "https://api.github.com/users/pglock/repos",
"events_url": "https://api.github.com/users/pglock/events{/privacy}",
"received_events_url": "https://api.github.com/users/pglock/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Seems good to me, thanks for that.\r\nLet me just check why we don't have Circle-CI tests on the PR anymore and I'll merge it."
] | 1,551 | 1,552 | 1,552 | CONTRIBUTOR | null | closes #354 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/357/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/357",
"html_url": "https://github.com/huggingface/transformers/pull/357",
"diff_url": "https://github.com/huggingface/transformers/pull/357.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/357.patch",
"merged_at": 1552291588000
} |
https://api.github.com/repos/huggingface/transformers/issues/356 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/356/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/356/comments | https://api.github.com/repos/huggingface/transformers/issues/356/events | https://github.com/huggingface/transformers/issues/356 | 418,022,337 | MDU6SXNzdWU0MTgwMjIzMzc= | 356 | How to add input mask to GPT? | {
"login": "jolinxql",
"id": 5552657,
"node_id": "MDQ6VXNlcjU1NTI2NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5552657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jolinxql",
"html_url": "https://github.com/jolinxql",
"followers_url": "https://api.github.com/users/jolinxql/followers",
"following_url": "https://api.github.com/users/jolinxql/following{/other_user}",
"gists_url": "https://api.github.com/users/jolinxql/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jolinxql/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jolinxql/subscriptions",
"organizations_url": "https://api.github.com/users/jolinxql/orgs",
"repos_url": "https://api.github.com/users/jolinxql/repos",
"events_url": "https://api.github.com/users/jolinxql/events{/privacy}",
"received_events_url": "https://api.github.com/users/jolinxql/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"GPT is a causal model so each tokens only attend to the left context and masking is not really needed.\r\nJust mask the output according to your lengths (and be such that each input sample start at the very first left token)."
] | 1,551 | 1,551 | 1,551 | NONE | null | I use `attention_mask` when I do `bert.forward(input, attention_mask)`. But with GPT, when I try to pass a batch of inputs to `OpenAIGPTModel` to extract a batch of features and the sentences in the batch have different lengths, I have no idea how to handle it. Or maybe the mask does not need to be given at all? If so, is zero the padding index?
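What I am currently considering is just zero-padding the batch and masking the outputs by length afterwards — a rough sketch (`batch_token_ids` is a placeholder for my tokenized batch and `gpt_model` for the loaded `OpenAIGPTModel`; whether 0 is a safe padding index is exactly what I am unsure about):

```python
import torch

lengths = [len(ids) for ids in batch_token_ids]
max_len = max(lengths)
padded = torch.zeros(len(batch_token_ids), max_len, dtype=torch.long)  # pad with 0
for i, ids in enumerate(batch_token_ids):
    padded[i, :len(ids)] = torch.tensor(ids)

hidden = gpt_model(padded)  # OpenAIGPTModel returns the hidden states
mask = (torch.arange(max_len)[None, :] < torch.tensor(lengths)[:, None]).unsqueeze(-1)
hidden = hidden * mask.float()  # zero out positions past each sentence's length
```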
For a quick review, this is the code for bert to extract embeddings.
``` python
all_encoder_layers, pooled_output = self.bert(inputs[:, :seq_max_len], token_type_ids=None,
attention_mask=att_mask.to(device))
embeds = torch.cat(all_encoder_layers[-self.bert_n_layers:],-1)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/356/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/355 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/355/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/355/comments | https://api.github.com/repos/huggingface/transformers/issues/355/events | https://github.com/huggingface/transformers/issues/355 | 417,981,275 | MDU6SXNzdWU0MTc5ODEyNzU= | 355 | [Question] Best choice for Sentence Compression model? | {
"login": "Hellisotherpeople",
"id": 12686966,
"node_id": "MDQ6VXNlcjEyNjg2OTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/12686966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hellisotherpeople",
"html_url": "https://github.com/Hellisotherpeople",
"followers_url": "https://api.github.com/users/Hellisotherpeople/followers",
"following_url": "https://api.github.com/users/Hellisotherpeople/following{/other_user}",
"gists_url": "https://api.github.com/users/Hellisotherpeople/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hellisotherpeople/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hellisotherpeople/subscriptions",
"organizations_url": "https://api.github.com/users/Hellisotherpeople/orgs",
"repos_url": "https://api.github.com/users/Hellisotherpeople/repos",
"events_url": "https://api.github.com/users/Hellisotherpeople/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hellisotherpeople/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,551 | 1,557 | 1,557 | NONE | null | I'm trying to develop a model that will do "word level extractive summarization", i.e., it will delete unimportant words or tokens and thereby summarize a document. This is also known as "Sentence Compression" in the NLP community.
I'm thinking of using the BertForTokenClassification module. Will it work with a large dataset, or must the whole dataset fit in my VRAM at once?
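Concretely, I am picturing a standard keep/delete token-labelling setup, streamed in mini-batches so only one batch has to sit in VRAM at a time — a rough sketch (the label convention and the pre-built tensors are placeholders of mine):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from pytorch_pretrained_bert import BertForTokenClassification

# 0 = delete this token, 1 = keep it
model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=2)

dataset = TensorDataset(all_input_ids, all_input_mask, all_label_ids)  # precomputed tensors
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for input_ids, input_mask, label_ids in loader:
    loss = model(input_ids, attention_mask=input_mask, labels=label_ids)
    loss.backward()
    # optimizer.step() / optimizer.zero_grad() omitted for brevity
```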
In my case, I also have access to a human-made abstractive summary of each document. I was wondering if I could do contextual sentence compression by having a model perform sequence-to-sequence conversion between the abstractive summary and the original document, and then extracting the words with the highest attention scores. Anyone know if this is a good idea or not? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/355/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/355/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/354 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/354/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/354/comments | https://api.github.com/repos/huggingface/transformers/issues/354/events | https://github.com/huggingface/transformers/issues/354 | 417,878,794 | MDU6SXNzdWU0MTc4Nzg3OTQ= | 354 | Dropout Layer in OpenAIGPTMultipleChoiceHead not used | {
"login": "pglock",
"id": 8183619,
"node_id": "MDQ6VXNlcjgxODM2MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8183619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pglock",
"html_url": "https://github.com/pglock",
"followers_url": "https://api.github.com/users/pglock/followers",
"following_url": "https://api.github.com/users/pglock/following{/other_user}",
"gists_url": "https://api.github.com/users/pglock/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pglock/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pglock/subscriptions",
"organizations_url": "https://api.github.com/users/pglock/orgs",
"repos_url": "https://api.github.com/users/pglock/repos",
"events_url": "https://api.github.com/users/pglock/events{/privacy}",
"received_events_url": "https://api.github.com/users/pglock/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,551 | 1,552 | 1,552 | CONTRIBUTOR | null | [OpenAIGPTMultipleChoiceHead](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling_openai.py#L363) defines an additional dropout layer, which is not used in `forward`.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/354/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/353 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/353/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/353/comments | https://api.github.com/repos/huggingface/transformers/issues/353/events | https://github.com/huggingface/transformers/issues/353 | 417,829,109 | MDU6SXNzdWU0MTc4MjkxMDk= | 353 | can't load the model | {
"login": "countback",
"id": 24824302,
"node_id": "MDQ6VXNlcjI0ODI0MzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/24824302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/countback",
"html_url": "https://github.com/countback",
"followers_url": "https://api.github.com/users/countback/followers",
"following_url": "https://api.github.com/users/countback/following{/other_user}",
"gists_url": "https://api.github.com/users/countback/gists{/gist_id}",
"starred_url": "https://api.github.com/users/countback/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/countback/subscriptions",
"organizations_url": "https://api.github.com/users/countback/orgs",
"repos_url": "https://api.github.com/users/countback/repos",
"events_url": "https://api.github.com/users/countback/events{/privacy}",
"received_events_url": "https://api.github.com/users/countback/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
}
] | closed | false | null | [] | [
"Strange error.\r\n\r\nCan you try:\r\n```python\r\nimport pytorch_pretrained_bert as ppb\r\nassert 'bert-large-cased' in ppb.modeling.PRETRAINED_MODEL_ARCHIVE_MAP\r\n```\r\nDo you have an open internet connection on the server that run the script?",
"@thomwolf Is there a way to point to a model on disk? This question seems related enough to daisychain with this issue. :-)",
"I noticed that this error happens when you exceed the disk space in the temporary directory while downloading BERT.",
"I ran into the same problem. When I used the Chinese pre-training model, it was sometimes good and sometimes bad.",
"@thomwolf I've been having the same error, and I received an AssertionError when I try \r\n\r\nassert 'bert-based-uncased' in bert.modeling.PRETRAINED_MODEL_ARCHIVE_MAP\r\n\r\nI've tried using both conda install and Pip install to get the package but in both cases I am not able to load any models",
"Hi @DuncanCam-Stein,\r\nWhich version of python do you have?\r\nCan you try to install from source?",
"@thomwolf @countback \r\nI finally fixed the problem by downloading the tf checkpoints directly from [here](https://github.com/google-research/bert), and then using the '[convert_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py)' function to create a `pytorch_model.bin` file.\r\nI then used the path to pytorch_model.bin and bert_config.json file in BertModel.from_pretrained('path/to/bin/and/json') instead of 'bert-base-uncased'.\r\n👍 \r\nHelpful info was found [here](https://devhub.io/repos/huggingface-pytorch-pretrained-BERT).",
"The network connection check has been relaxed in the now merged #500.\r\nSerialization of the model have also been simplified a lot with #489.\r\n\r\nThese improvements will be included in the next PyPI release (probably next week).\r\n\r\nIn the meantime you can install from `master` and already use the serialization best-practices described in the README [here](https://github.com/huggingface/pytorch-pretrained-BERT#serialization-best-practices)",
"As @martiansideofthemoon said, I met this error because I didn't have enough space on disk.\r\n\r\nCheck if you can download the file with :\r\n\r\n`wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz`",
"@martiansideofthemoon What does that mean if we can download it via wget but not when we use from_pretrained? is it a disk space problem?\r\n",
"@Hannabrahman \r\nIf you can download it via wget, it means you have enough disk space, so the issue is from somewhere else.",
"@Colanim Thanks. I figured out it was memory issue on the cache directory. ",
"@Hannabrahman \r\n\r\n> @Colanim Thanks. I figured out it was memory issue on the cache directory.\r\n\r\nhow did you solve this issue?",
"@raj5287 \r\nFree some disk space on the cache directory or specify another cache directory ",
"@Colanim i have enough disk space since i have downloaded the file using \r\n`wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz` but i am not sure how to specify another cache directory or use the downloaded file (i am new to pytorch and ubuntu :| )",
"> @thomwolf @countback\r\n> I finally fixed the problem by downloading the tf checkpoints directly from [here](https://github.com/google-research/bert), and then using the '[convert_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py)' function to create a `pytorch_model.bin` file.\r\n> I then used the path to pytorch_model.bin and bert_config.json file in BertModel.from_pretrained('path/to/bin/and/json') instead of 'bert-base-uncased'.\r\n> +1\r\n> Helpful info was found [here](https://devhub.io/repos/huggingface-pytorch-pretrained-BERT).\r\n\r\n@DuncanCam-Stein i have downloaded and placed _pytorch_model.bin_ and _bert_config.json_ in _bert_tagger_ folder but when i am doing `tokenizer = BertModel.from_pretrained('home/user/Download/bert_pos_tagger/bert_tagger/')` i am still getting the error : `Model name 'home/user/Downloads/bert_pos_tagger/bert_tagger/' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'home/user/Downloads/bert_pos_tagger/bert_tagger/' was a path or url but couldn't find any file associated to this path or url.`",
"try to delete cahe file and rerun the command",
"I noticed that the error appears when I execute my script in debug mode (in Visual Studio Code). I fixed it by executing the script over the terminal `python myscriptname.py` once. Afterwards Debug mode works fine. \r\n\r\nBtw. I got the same problem with the tokenizer and this also fixed it.",
"> > > > model = BertModel.from_pretrained('bert-large-cased')\r\n> > > > Model name 'bert-large-cased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased.tar.gz' was a path or url but couldn't find any file associated to this path or url.\r\n\r\nhello,I meet the problem when run the torch bert code 👍 \r\n\r\nOSError: Can't load weights for 'bert-base-uncased'. Make sure that:\r\n\r\n- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'bert-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.\r\nif I can download the bert-base-uncased weight, where I should put the file in ? hope your reply~",
"@DTW1004 check your network connection. This happens when I'm behind a proxy and SSL/proxy isn't configured appropriately.",
"bro,I've been having the same error. and then I try to debug the specific code\r\nBertTokenizer.from_pretrained(MODEL_NAME)\r\nstep in to the origin code, I find that I could step every line of the transformer in the debug mode. when step out origin code, the tokenizer tool could be used. what‘s more , the code could be run normaly in the next time I run the code. ",
"I met the issue and I found the reason is that my server connecting was offline.",
"Running into the same issue on AWS Lambda. Neither relative and absolute paths will allow the model to load from pre-trained. ",
"Here's what I am doing:\r\n\r\n```shell\r\n!wget -q https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased.tar.gz\r\n!tar xf bert-base-multilingual-cased.tar.gz\r\n```\r\n\r\nNow, if I do:\r\n\r\n```python\r\nencoder = TFBertModel.from_pretrained(\"bert-base-multilingual-cased\")\r\n```\r\n\r\nI still get:\r\n\r\n```shell\r\nOSError: Can't load config for 'bert-base-multilingual-cased'. Make sure that:\r\n\r\n- 'bert-base-multilingual-cased' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'bert-base-multilingual-cased' is the correct path to a directory containing a config.json file\r\n```",
"Here's what I am doing:\r\n\r\nfrom transformers import pipeline\r\n\r\ndef corret_sentence(sentence,unmasker):\r\n res = unmasker(sentence)\r\n return res \r\n\r\nif __name__=='__main__':\r\n sentence = \"关小彤\"\r\n new_sentence = \"\"\r\n unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-2_H-512') \r\n for idx,ch in enumerate(sentence): \r\n new_sentence = sentence[:idx] + \"[MASK]\" + sentence[idx+1:]\r\n print(corret_sentence(new_sentence,unmasker))\r\n\r\nI get:\r\n\r\nValueError: Could not load model uer/chinese_roberta_L-2_H-512 with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForMaskedLM'>, <class 'transformers.models.bert.modeling_bert.BertForMaskedLM'>).",
"> Here's what I am doing:\r\n> \r\n> from transformers import pipeline\r\n> \r\n> def corret_sentence(sentence,unmasker): res = unmasker(sentence) return res\r\n> \r\n> if **name**=='**main**': sentence = \"关小彤\" new_sentence = \"\" unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-2_H-512') for idx,ch in enumerate(sentence): new_sentence = sentence[:idx] + \"[MASK]\" + sentence[idx+1:] print(corret_sentence(new_sentence,unmasker))\r\n> \r\n> I get:\r\n> \r\n> ValueError: Could not load model uer/chinese_roberta_L-2_H-512 with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForMaskedLM'>, <class 'transformers.models.bert.modeling_bert.BertForMaskedLM'>).\r\n\r\nhow could solve this?\r\n\r\n\r\ndid you solve this problem?\r\n\r\ni am also having sample",
"> Free some disk space\r\n\r\nhow can I free some disk space.\r\nwhich shell command should i use?",
"> @thomwolf @countback I finally fixed the problem by downloading the tf checkpoints directly from [here](https://github.com/google-research/bert), and then using the '[convert_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py)' function to create a `pytorch_model.bin` file. I then used the path to pytorch_model.bin and bert_config.json file in BertModel.from_pretrained('path/to/bin/and/json') instead of 'bert-base-uncased'. 👍 Helpful info was found [here](https://devhub.io/repos/huggingface-pytorch-pretrained-BERT).\r\n\r\nCan you please specify which model exactly you downloaded and how you ran the function? Thanks"
] | 1,551 | 1,656 | 1,555 | NONE | null | >>> model = BertModel.from_pretrained('bert-large-cased')
Model name 'bert-large-cased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased.tar.gz' was a path or url but couldn't find any file associated to this path or url. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/353/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/353/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/352 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/352/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/352/comments | https://api.github.com/repos/huggingface/transformers/issues/352/events | https://github.com/huggingface/transformers/issues/352 | 417,772,856 | MDU6SXNzdWU0MTc3NzI4NTY= | 352 | How to incrementally do fine tune train | {
"login": "shuvadibp",
"id": 37171714,
"node_id": "MDQ6VXNlcjM3MTcxNzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/37171714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuvadibp",
"html_url": "https://github.com/shuvadibp",
"followers_url": "https://api.github.com/users/shuvadibp/followers",
"following_url": "https://api.github.com/users/shuvadibp/following{/other_user}",
"gists_url": "https://api.github.com/users/shuvadibp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuvadibp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuvadibp/subscriptions",
"organizations_url": "https://api.github.com/users/shuvadibp/orgs",
"repos_url": "https://api.github.com/users/shuvadibp/repos",
"events_url": "https://api.github.com/users/shuvadibp/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuvadibp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I think my answer here can help you: https://github.com/huggingface/pytorch-pretrained-BERT/issues/332",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hey @shuvadibp, did you figure out a way of doing it? I would like to talk to you about the same.."
] | 1,551 | 1,560 | 1,558 | NONE | null | I am using BERT for question answering. After fine-tuning on the SQuAD dataset, I want to further train it on new questions from my own domain.
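For concreteness, the kind of flow I am after looks roughly like this (a rough sketch; the paths are placeholders and I am not sure this is the intended way):

```python
import torch
from pytorch_pretrained_bert import BertForQuestionAnswering, BertTokenizer

# start from the checkpoint produced by the SQuAD run instead of the stock weights
model = BertForQuestionAnswering.from_pretrained("/path/to/squad_output_dir")
tokenizer = BertTokenizer.from_pretrained("/path/to/squad_output_dir")

# ... run the same run_squad.py-style training loop, but on my own domain questions ...

# then save the combined model to use for prediction
torch.save(model.state_dict(), "/path/to/my_output_dir/pytorch_model.bin")
```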
Please suggest how I can take the newly generated pytorch_model.bin file and then continue training it with my own data to end up with my own pytorch_squad_plus_my_model.bin? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/352/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/351 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/351/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/351/comments | https://api.github.com/repos/huggingface/transformers/issues/351/events | https://github.com/huggingface/transformers/issues/351 | 417,721,684 | MDU6SXNzdWU0MTc3MjE2ODQ= | 351 | Little training has no impact | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,551 | 1,558 | 1,558 | NONE | null | When I added a few training examples to trainxx.json (a few questions with their answers) and ran the training, a new pytorch_model.bin file was generated ( = uncased + SQuAD training + my few questions).
However, when the same question was put in devxx.json, the answer returned is not the one given in training.
Why does the training have no positive impact? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/351/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/350 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/350/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/350/comments | https://api.github.com/repos/huggingface/transformers/issues/350/events | https://github.com/huggingface/transformers/issues/350 | 417,703,868 | MDU6SXNzdWU0MTc3MDM4Njg= | 350 | Bert Uncased Large giving very low results with SQUAD v1.1 dataset | {
"login": "sarthak221995",
"id": 11936036,
"node_id": "MDQ6VXNlcjExOTM2MDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/11936036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarthak221995",
"html_url": "https://github.com/sarthak221995",
"followers_url": "https://api.github.com/users/sarthak221995/followers",
"following_url": "https://api.github.com/users/sarthak221995/following{/other_user}",
"gists_url": "https://api.github.com/users/sarthak221995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sarthak221995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarthak221995/subscriptions",
"organizations_url": "https://api.github.com/users/sarthak221995/orgs",
"repos_url": "https://api.github.com/users/sarthak221995/repos",
"events_url": "https://api.github.com/users/sarthak221995/events{/privacy}",
"received_events_url": "https://api.github.com/users/sarthak221995/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,551 | 1,552 | 1,552 | NONE | null | **Configuration:**
- do_lower_case=True
- max_answer_length=30
- n_best_size=20
- verbose_logging=False
- bert_model="bert-large-uncased"
- max_seq_length=384
- doc_stride=128
- max_query_length=192
- local_rank=-1
- train_batch_size=12
- predict_batch_size=12
- num_train_epochs=2.0
- gradient_accumulation_steps=1
- fp16=True
- warmup_proportion=0.1
- learning_rate=3e-5
**Results:**
1. Getting very bad results with the given configuration.
2. Kindly let me know if there is an issue with the configuration.
3. Getting many repeated terms for different questions under the same context
[dev1.1_squad_best_results.txt](https://github.com/huggingface/pytorch-pretrained-BERT/files/2935422/dev1.1_squad_best_results.txt)
Sample SQuAD dev-v1.1 predictions (first 9 questions from the development dataset):
"56be4db0acb8001400a502ec": "Levi's Stadium in the San Francisco Bay Area at Santa Clara,",
"56be4db0acb8001400a502ed": "Levi's Stadium in the San Francisco Bay Area at Santa Clara,",
"56be4db0acb8001400a502ee": "Levi's",
"56be4db0acb8001400a502ef": "Levi's",
"56be4db0acb8001400a502f0": "Levi's Stadium in the San Francisco Bay Area at Santa Clara,",
"56be8e613aeaaa14008c90d1": "7",
"56be8e613aeaaa14008c90d2": "Levi's",
"56be8e613aeaaa14008c90d3": "7, 2016, at Levi's",
"56bea9923aeaaa14008c91b9": "7",
Looking forward to your help.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/350/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/349 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/349/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/349/comments | https://api.github.com/repos/huggingface/transformers/issues/349/events | https://github.com/huggingface/transformers/issues/349 | 417,596,167 | MDU6SXNzdWU0MTc1OTYxNjc= | 349 | Unable to train (fine-tuning) BERT with small training set | {
"login": "shuvadibp",
"id": 37171714,
"node_id": "MDQ6VXNlcjM3MTcxNzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/37171714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuvadibp",
"html_url": "https://github.com/shuvadibp",
"followers_url": "https://api.github.com/users/shuvadibp/followers",
"following_url": "https://api.github.com/users/shuvadibp/following{/other_user}",
"gists_url": "https://api.github.com/users/shuvadibp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuvadibp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuvadibp/subscriptions",
"organizations_url": "https://api.github.com/users/shuvadibp/orgs",
"repos_url": "https://api.github.com/users/shuvadibp/repos",
"events_url": "https://api.github.com/users/shuvadibp/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuvadibp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Probably an issue with `t_total` and the number of training optimization steps similarly to #329.\r\nCould you check the number of total training step sent to the optimizer? Which example script are you using?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,551 | 1,557 | 1,557 | NONE | null | I am trying to train BERT with 1 context and 1 answer in train.json, and I am getting the error below.
_lr_this_step = args.learning_rate * warmup_linear(global_step/t_total, args.warmup_proportion)
ZeroDivisionError: division by zero_
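Looking at the example script, I think `t_total` comes from something roughly like this (my paraphrase, not the exact code), which truncates to 0 when there are fewer examples than one batch:

```python
# rough paraphrase of how the example derives the total number of optimization steps
num_train_optimization_steps = int(
    len(train_examples) / args.train_batch_size / args.gradient_accumulation_steps
) * args.num_train_epochs
# with a single training example this integer division gives 0, hence the division by zero
```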
After training with 1 context and 5 answers the error is avoided, but I do not see any change in the answers obtained from BERT. Please help with this, and let me know if anyone has tried this kind of fine-tuning.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/349/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/348 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/348/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/348/comments | https://api.github.com/repos/huggingface/transformers/issues/348/events | https://github.com/huggingface/transformers/pull/348 | 417,472,951 | MDExOlB1bGxSZXF1ZXN0MjU4NDYyNzI3 | 348 | output data | {
"login": "athorneak13",
"id": 43690738,
"node_id": "MDQ6VXNlcjQzNjkwNzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/43690738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/athorneak13",
"html_url": "https://github.com/athorneak13",
"followers_url": "https://api.github.com/users/athorneak13/followers",
"following_url": "https://api.github.com/users/athorneak13/following{/other_user}",
"gists_url": "https://api.github.com/users/athorneak13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/athorneak13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/athorneak13/subscriptions",
"organizations_url": "https://api.github.com/users/athorneak13/orgs",
"repos_url": "https://api.github.com/users/athorneak13/repos",
"events_url": "https://api.github.com/users/athorneak13/events{/privacy}",
"received_events_url": "https://api.github.com/users/athorneak13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Wrong upstream I guess. Closing."
] | 1,551 | 1,551 | 1,551 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/348/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/348",
"html_url": "https://github.com/huggingface/transformers/pull/348",
"diff_url": "https://github.com/huggingface/transformers/pull/348.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/348.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/347 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/347/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/347/comments | https://api.github.com/repos/huggingface/transformers/issues/347/events | https://github.com/huggingface/transformers/pull/347 | 417,468,974 | MDExOlB1bGxSZXF1ZXN0MjU4NDU5NjA3 | 347 | Processor for SST-2 task | {
"login": "jplehmann",
"id": 460964,
"node_id": "MDQ6VXNlcjQ2MDk2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/460964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplehmann",
"html_url": "https://github.com/jplehmann",
"followers_url": "https://api.github.com/users/jplehmann/followers",
"following_url": "https://api.github.com/users/jplehmann/following{/other_user}",
"gists_url": "https://api.github.com/users/jplehmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplehmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplehmann/subscriptions",
"organizations_url": "https://api.github.com/users/jplehmann/orgs",
"repos_url": "https://api.github.com/users/jplehmann/repos",
"events_url": "https://api.github.com/users/jplehmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplehmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @jplehmann!"
] | 1,551 | 1,551 | 1,551 | CONTRIBUTOR | null | Added a processor for SST-2 to the `run_classifier` script.
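For reference, a sketch of what such a processor looks like, following the `DataProcessor`/`InputExample` pattern already defined in `run_classifier.py` (the merged code may differ slightly in its details):
```
import os

class Sst2Processor(DataProcessor):
    """Processor for the SST-2 data set (GLUE version)."""

    def get_train_examples(self, data_dir):
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")

    def get_dev_examples(self, data_dir):
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")

    def get_labels(self):
        return ["0", "1"]

    def _create_examples(self, lines, set_type):
        """Creates examples from a GLUE SST-2 tsv file (sentence <tab> label)."""
        examples = []
        for (i, line) in enumerate(lines):
            if i == 0:  # skip the header row
                continue
            guid = "%s-%s" % (set_type, i)
            examples.append(
                InputExample(guid=guid, text_a=line[0], text_b=None, label=line[1]))
        return examples
```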
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/347/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/347",
"html_url": "https://github.com/huggingface/transformers/pull/347",
"diff_url": "https://github.com/huggingface/transformers/pull/347.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/347.patch",
"merged_at": 1551862107000
} |
https://api.github.com/repos/huggingface/transformers/issues/346 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/346/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/346/comments | https://api.github.com/repos/huggingface/transformers/issues/346/events | https://github.com/huggingface/transformers/issues/346 | 417,441,346 | MDU6SXNzdWU0MTc0NDEzNDY= | 346 | MRPC Score Lower than Expected | {
"login": "jplehmann",
"id": 460964,
"node_id": "MDQ6VXNlcjQ2MDk2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/460964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplehmann",
"html_url": "https://github.com/jplehmann",
"followers_url": "https://api.github.com/users/jplehmann/followers",
"following_url": "https://api.github.com/users/jplehmann/following{/other_user}",
"gists_url": "https://api.github.com/users/jplehmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplehmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplehmann/subscriptions",
"organizations_url": "https://api.github.com/users/jplehmann/orgs",
"repos_url": "https://api.github.com/users/jplehmann/repos",
"events_url": "https://api.github.com/users/jplehmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplehmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Argh, just realized I was on `0.3.0` which is what pip installed due to some dependencies. Upgrading to `0.6.1` and now I'm getting expected scores:\r\n```\r\neval_accuracy = 0.8529411764705882\r\neval_loss = 0.39120761538837473\r\nglobal_step = 345\r\nloss = 0.17308216924252717\r\n\r\neval_accuracy = 0.8431372549019608\r\neval_loss = 0.49456917187746835\r\nglobal_step = 345\r\nloss = 0.12193756103515625\r\n\r\neval_accuracy = 0.875\r\neval_loss = 0.4023934503396352\r\nglobal_step = 345\r\nloss = 0.14832657523777174\r\n\r\neval_accuracy = 0.8553921568627451\r\neval_loss = 0.44353585865567713\r\nglobal_step = 345\r\nloss = 0.17814078952955162\r\n```"
] | 1,551 | 1,551 | 1,551 | CONTRIBUTOR | null | I expect to see MRPC scores between 84-88% as advertised. What I am seeing with different seeds is 79-84% consistently. (I thought perhaps the weight initialization was the issue but seems not to be the case #339.)
I am running with the provided command and fp16, using a GCE instance with a Tesla T4.
> time python run_classifier.py --task_name MRPC --do_train --do_eval --do_lower_case --data_dir $GLUE_DIR/MRPC/ --bert_model bert-base-uncased --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/mrpc_output18/ --seed 18 --fp16
Example outputs:
```
mrpc_output15/eval_results.txt
eval_accuracy = 0.803921568627451
eval_loss = 0.44585930088571474
global_step = 345
loss = 0.42245775305706523
mrpc_output16/eval_results.txt
eval_accuracy = 0.8455882352941176
eval_loss = 0.38226841594658645
global_step = 345
loss = 0.25925399116847825
mrpc_output17/eval_results.txt
eval_accuracy = 0.7916666666666666
eval_loss = 0.4917685123635273
global_step = 345
loss = 0.24811905570652174
mrpc_output3/eval_results.txt
eval_accuracy = 0.8431372549019608
eval_loss = 0.42019053533965467
global_step = 345
loss = 0.2503709876019022
mrpc_output42/eval_results.txt
eval_accuracy = 0.8406862745098039
eval_loss = 0.44909875124108556
global_step = 345
loss = 0.21309310249660326
mrpc_output44/eval_results.txt
eval_accuracy = 0.8406862745098039
eval_loss = 0.45059946084431574
global_step = 345
loss = 0.10150747223068839
mrpc_output45/eval_results.txt
eval_accuracy = 0.8063725490196079
eval_loss = 0.42597512491777834
global_step = 345
loss = 0.29104428498641305
mrpc_output18/eval_results.txt
eval_accuracy = 0.8161764705882353
eval_loss = 0.4096583215629353
global_step = 345
```
Other output:
```
03/05/2019 18:23:31 - INFO - pytorch_pretrained_bert.modeling - Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 30522
}
```
Any ideas on why this is the case? Happy to provide more output. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/346/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/345 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/345/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/345/comments | https://api.github.com/repos/huggingface/transformers/issues/345/events | https://github.com/huggingface/transformers/issues/345 | 417,196,931 | MDU6SXNzdWU0MTcxOTY5MzE= | 345 | Not able to import RandomSampler, Getting error "ImportError: cannot import name 'RandomSampler'"? | {
"login": "Linkyx",
"id": 10572007,
"node_id": "MDQ6VXNlcjEwNTcyMDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/10572007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Linkyx",
"html_url": "https://github.com/Linkyx",
"followers_url": "https://api.github.com/users/Linkyx/followers",
"following_url": "https://api.github.com/users/Linkyx/following{/other_user}",
"gists_url": "https://api.github.com/users/Linkyx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Linkyx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Linkyx/subscriptions",
"organizations_url": "https://api.github.com/users/Linkyx/orgs",
"repos_url": "https://api.github.com/users/Linkyx/repos",
"events_url": "https://api.github.com/users/Linkyx/events{/privacy}",
"received_events_url": "https://api.github.com/users/Linkyx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"how do you fix this issue?",
"> how do you fix this issue?\r\n\r\ntry to update your torch version,i found it didn't work in torch 4.0.0, try \"torch >=4.0.1\""
] | 1,551 | 1,552 | 1,551 | NONE | null | Not able to import RandomSampler, Getting error "ImportError: cannot import name 'RandomSampler'"? Did I get a wrong torch version? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/345/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/345/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/344 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/344/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/344/comments | https://api.github.com/repos/huggingface/transformers/issues/344/events | https://github.com/huggingface/transformers/issues/344 | 417,137,321 | MDU6SXNzdWU0MTcxMzczMjE= | 344 | BertEmbedding not initialized with `padding_idx=0` | {
"login": "haozheji",
"id": 25786613,
"node_id": "MDQ6VXNlcjI1Nzg2NjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/25786613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haozheji",
"html_url": "https://github.com/haozheji",
"followers_url": "https://api.github.com/users/haozheji/followers",
"following_url": "https://api.github.com/users/haozheji/following{/other_user}",
"gists_url": "https://api.github.com/users/haozheji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haozheji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haozheji/subscriptions",
"organizations_url": "https://api.github.com/users/haozheji/orgs",
"repos_url": "https://api.github.com/users/haozheji/repos",
"events_url": "https://api.github.com/users/haozheji/events{/privacy}",
"received_events_url": "https://api.github.com/users/haozheji/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could be, do you want to submit a PR to update this?",
"Closed by #358, thanks @cdjhz!"
] | 1,551 | 1,552 | 1,552 | CONTRIBUTOR | null | https://github.com/huggingface/pytorch-pretrained-BERT/blob/2152bfeae82439600dc5b5deab057a3c4331c62d/pytorch_pretrained_bert/modeling.py#L696
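For illustration, a minimal sketch of what passing `padding_idx=0` at the linked line changes (bert-base dimensions assumed; this is not the library's code) — the padding row is initialized to zeros and excluded from gradient updates:
```
import torch
import torch.nn as nn

vocab_size, hidden_size = 30522, 768  # bert-base-uncased dimensions
word_embeddings = nn.Embedding(vocab_size, hidden_size, padding_idx=0)

input_ids = torch.tensor([[101, 2054, 102, 0, 0]])  # trailing 0s are [PAD]
vectors = word_embeddings(input_ids)
print(vectors[0, 3].abs().sum())  # 0 -- padding positions map to the all-zero row
```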
The BERT embeddings are not initialized with `padding_idx=0`, which may result in non-zero embeddings for zero-padded positions in some earlier versions of PyTorch. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/344/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/344/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/343 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/343/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/343/comments | https://api.github.com/repos/huggingface/transformers/issues/343/events | https://github.com/huggingface/transformers/issues/343 | 417,107,904 | MDU6SXNzdWU0MTcxMDc5MDQ= | 343 | Tokenizer defaults lowercase even when bert_model is cased | {
"login": "chmccreery",
"id": 25408780,
"node_id": "MDQ6VXNlcjI1NDA4Nzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/25408780?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chmccreery",
"html_url": "https://github.com/chmccreery",
"followers_url": "https://api.github.com/users/chmccreery/followers",
"following_url": "https://api.github.com/users/chmccreery/following{/other_user}",
"gists_url": "https://api.github.com/users/chmccreery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chmccreery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chmccreery/subscriptions",
"organizations_url": "https://api.github.com/users/chmccreery/orgs",
"repos_url": "https://api.github.com/users/chmccreery/repos",
"events_url": "https://api.github.com/users/chmccreery/events{/privacy}",
"received_events_url": "https://api.github.com/users/chmccreery/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,551 | 1,557 | 1,557 | NONE | null | https://github.com/huggingface/pytorch-pretrained-BERT/blob/2152bfeae82439600dc5b5deab057a3c4331c62d/pytorch_pretrained_bert/tokenization.py#L77
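A hypothetical sketch of the default being suggested below (the helper name is illustrative, not part of the library):
```
def default_do_lower_case(bert_model: str) -> bool:
    # lower-case by default only for the uncased checkpoints
    return "uncased" in bert_model

print(default_do_lower_case("bert-base-uncased"))  # True
print(default_do_lower_case("bert-base-cased"))    # False
```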
A clearer behavior would be to check whether 'uncased' is in bert_model and set the default value of do_lower_case accordingly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/343/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/343/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/342 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/342/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/342/comments | https://api.github.com/repos/huggingface/transformers/issues/342/events | https://github.com/huggingface/transformers/issues/342 | 417,050,205 | MDU6SXNzdWU0MTcwNTAyMDU= | 342 | Usage example needs [CLS] and [SEP] added post-tokenization | {
"login": "jplehmann",
"id": 460964,
"node_id": "MDQ6VXNlcjQ2MDk2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/460964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplehmann",
"html_url": "https://github.com/jplehmann",
"followers_url": "https://api.github.com/users/jplehmann/followers",
"following_url": "https://api.github.com/users/jplehmann/following{/other_user}",
"gists_url": "https://api.github.com/users/jplehmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplehmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplehmann/subscriptions",
"organizations_url": "https://api.github.com/users/jplehmann/orgs",
"repos_url": "https://api.github.com/users/jplehmann/repos",
"events_url": "https://api.github.com/users/jplehmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplehmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Argh, I just realized that due to dependency conflicts, pip had installed an old version `0.3.0`.\r\n\r\nWas fixed here:\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/issues/303\r\n"
] | 1,551 | 1,551 | 1,551 | CONTRIBUTOR | null | Since probably #176, the usage example results in the special tokens getting normalized in a bad way and the assertion clearly fails.
```
['[',
'cl',
'##s',
']',
'who',
'was',
'jim',
'henson',
'[MASK]',
'[',
'sep',
']',
'jim',
'henson',
'was',
'a',
'puppet',
'##eer',
'[',
'sep',
']']
```
I believe something like this is intended:
```
text1 = "Who was Jim Henson ?"
text2 = "Jim Henson was a puppeteer"
tokenized_text = ['[CLS]'] + tokenizer.tokenize(text1) + ['[SEP]'] + tokenizer.tokenize(text2) + ['[SEP]']
```
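For completeness, a self-contained version of the intended usage (assuming `pytorch_pretrained_bert` is installed; the vocabulary is downloaded on first use):
```
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
text1 = "Who was Jim Henson ?"
text2 = "Jim Henson was a puppeteer"

tokenized_text = (["[CLS]"] + tokenizer.tokenize(text1) + ["[SEP]"]
                  + tokenizer.tokenize(text2) + ["[SEP]"])
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = ([0] * (len(tokenizer.tokenize(text1)) + 2)
                + [1] * (len(tokenizer.tokenize(text2)) + 1))
assert len(indexed_tokens) == len(segments_ids) == len(tokenized_text)
```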
I really appreciate this project!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/342/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/341 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/341/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/341/comments | https://api.github.com/repos/huggingface/transformers/issues/341/events | https://github.com/huggingface/transformers/pull/341 | 417,023,120 | MDExOlB1bGxSZXF1ZXN0MjU4MTEzNTQ3 | 341 | catch exception if pathlib not install | {
"login": "potatochip",
"id": 10922120,
"node_id": "MDQ6VXNlcjEwOTIyMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/10922120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/potatochip",
"html_url": "https://github.com/potatochip",
"followers_url": "https://api.github.com/users/potatochip/followers",
"following_url": "https://api.github.com/users/potatochip/following{/other_user}",
"gists_url": "https://api.github.com/users/potatochip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/potatochip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/potatochip/subscriptions",
"organizations_url": "https://api.github.com/users/potatochip/orgs",
"repos_url": "https://api.github.com/users/potatochip/repos",
"events_url": "https://api.github.com/users/potatochip/events{/privacy}",
"received_events_url": "https://api.github.com/users/potatochip/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,551 | 1,551 | 1,551 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/341/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/341",
"html_url": "https://github.com/huggingface/transformers/pull/341",
"diff_url": "https://github.com/huggingface/transformers/pull/341.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/341.patch",
"merged_at": 1551862082000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/340 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/340/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/340/comments | https://api.github.com/repos/huggingface/transformers/issues/340/events | https://github.com/huggingface/transformers/issues/340 | 416,582,484 | MDU6SXNzdWU0MTY1ODI0ODQ= | 340 | optimizer.zero_grad() in run_openai_gpt.py? | {
"login": "jaminche",
"id": 21203744,
"node_id": "MDQ6VXNlcjIxMjAzNzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/21203744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaminche",
"html_url": "https://github.com/jaminche",
"followers_url": "https://api.github.com/users/jaminche/followers",
"following_url": "https://api.github.com/users/jaminche/following{/other_user}",
"gists_url": "https://api.github.com/users/jaminche/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaminche/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaminche/subscriptions",
"organizations_url": "https://api.github.com/users/jaminche/orgs",
"repos_url": "https://api.github.com/users/jaminche/repos",
"events_url": "https://api.github.com/users/jaminche/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaminche/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh that's a mistake indeed, thanks for pointing out.\r\nFixed on master."
] | 1,551 | 1,551 | 1,551 | NONE | null | In `run_openai_gpt.py`, should there be a call to `optimizer.zero_grad()` after updating parameters so that we zero out the gradients between minibatches?
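For context, the standard PyTorch pattern being asked about — a generic, runnable sketch, not the script's exact code:
```
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
batches = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]

for x, y in batches:
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()  # without this, gradients keep accumulating across minibatches
```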
https://github.com/huggingface/pytorch-pretrained-BERT/blob/2152bfeae82439600dc5b5deab057a3c4331c62d/examples/run_openai_gpt.py#L212 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/340/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/339 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/339/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/339/comments | https://api.github.com/repos/huggingface/transformers/issues/339/events | https://github.com/huggingface/transformers/issues/339 | 416,480,812 | MDU6SXNzdWU0MTY0ODA4MTI= | 339 | Why the weights are not intialized ? | {
"login": "lemo2012",
"id": 2843040,
"node_id": "MDQ6VXNlcjI4NDMwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2843040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lemo2012",
"html_url": "https://github.com/lemo2012",
"followers_url": "https://api.github.com/users/lemo2012/followers",
"following_url": "https://api.github.com/users/lemo2012/following{/other_user}",
"gists_url": "https://api.github.com/users/lemo2012/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lemo2012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lemo2012/subscriptions",
"organizations_url": "https://api.github.com/users/lemo2012/orgs",
"repos_url": "https://api.github.com/users/lemo2012/repos",
"events_url": "https://api.github.com/users/lemo2012/events{/privacy}",
"received_events_url": "https://api.github.com/users/lemo2012/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I was wondering this myself. It looks like there's some configuration mismatch -- some parameters found which aren't used, and a few expected that aren't found.\r\n\r\nI'm not sure if this is expected, since the top-level task-specific classifier is correctly NOT pre-trained... or if it's something more.\r\n\r\n(question about lower performance moved into new issue)\r\n",
"I dug up some related issues which confirms my guess above -- this kind of message is expected since the models are not yet find-tuned to the task.\r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/issues/161\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/issues/180\r\n\r\n",
"Yes this is the expected behavior.\r\nI don't want to make the warning messages says this is \"all good\" because in some case, depending on the model you are loading in, this could be an unwanted behavior (not loading all the weights).",
"Hello @thomwolf : I continued pre-training with bert-base-uncased without fine tuning on round about 22K sequences and the precision @ K for MaskedLM task did not change at all. Is the result legitimate or do I rather have a problem loading the weights? I received the same warning message/ INFO. The data set is from the automotive domain. At what point can I expect the weights to change? Thank you very much for experience values. ",
"@viva2202, I did the same here using directly the \"run_language_modeling.py\" script, but with 11k sequences (I continued pretraining using training data only), and then fine-tuned it using BertForSequenceClassification. Got 1.75% increase in accuracy compared to not continuing pretraining."
] | 1,551 | 1,592 | 1,551 | NONE | null | 03/03/2019 14:13:01 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForMultiLabelSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
03/03/2019 14:13:01 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForMultiLabelSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias'] | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/339/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/339/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/338 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/338/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/338/comments | https://api.github.com/repos/huggingface/transformers/issues/338/events | https://github.com/huggingface/transformers/pull/338 | 416,479,787 | MDExOlB1bGxSZXF1ZXN0MjU3NzE0NTI3 | 338 | Fix top k generation for k != 0 | {
"login": "CatalinVoss",
"id": 332459,
"node_id": "MDQ6VXNlcjMzMjQ1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/332459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CatalinVoss",
"html_url": "https://github.com/CatalinVoss",
"followers_url": "https://api.github.com/users/CatalinVoss/followers",
"following_url": "https://api.github.com/users/CatalinVoss/following{/other_user}",
"gists_url": "https://api.github.com/users/CatalinVoss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CatalinVoss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CatalinVoss/subscriptions",
"organizations_url": "https://api.github.com/users/CatalinVoss/orgs",
"repos_url": "https://api.github.com/users/CatalinVoss/repos",
"events_url": "https://api.github.com/users/CatalinVoss/events{/privacy}",
"received_events_url": "https://api.github.com/users/CatalinVoss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @CatalinVoss!"
] | 1,551 | 1,563 | 1,551 | CONTRIBUTOR | null | Seems like the shapes didn't line up for the comparison: logits are `(batch_size, values)`, while the minima had shape `(batch_size,)`, so they couldn't be directly compared. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/338/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/338",
"html_url": "https://github.com/huggingface/transformers/pull/338",
"diff_url": "https://github.com/huggingface/transformers/pull/338.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/338.patch",
"merged_at": 1551862053000
} |
https://api.github.com/repos/huggingface/transformers/issues/337 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/337/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/337/comments | https://api.github.com/repos/huggingface/transformers/issues/337/events | https://github.com/huggingface/transformers/pull/337 | 416,458,040 | MDExOlB1bGxSZXF1ZXN0MjU3NzAyNTA0 | 337 | Allow tokenization of sequences > 512 for caching | {
"login": "CatalinVoss",
"id": 332459,
"node_id": "MDQ6VXNlcjMzMjQ1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/332459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CatalinVoss",
"html_url": "https://github.com/CatalinVoss",
"followers_url": "https://api.github.com/users/CatalinVoss/followers",
"following_url": "https://api.github.com/users/CatalinVoss/following{/other_user}",
"gists_url": "https://api.github.com/users/CatalinVoss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CatalinVoss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CatalinVoss/subscriptions",
"organizations_url": "https://api.github.com/users/CatalinVoss/orgs",
"repos_url": "https://api.github.com/users/CatalinVoss/repos",
"events_url": "https://api.github.com/users/CatalinVoss/events{/privacy}",
"received_events_url": "https://api.github.com/users/CatalinVoss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"OK, done. Thanks!",
"Nice, thanks @CatalinVoss (and @rodgzilla)!"
] | 1,551 | 1,563 | 1,551 | CONTRIBUTOR | null | For many applications requiring randomized data access, it's easier to cache the tokenized representations than the words. So why not turn this into a warning? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/337/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/337",
"html_url": "https://github.com/huggingface/transformers/pull/337",
"diff_url": "https://github.com/huggingface/transformers/pull/337.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/337.patch",
"merged_at": 1551861950000
} |
https://api.github.com/repos/huggingface/transformers/issues/336 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/336/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/336/comments | https://api.github.com/repos/huggingface/transformers/issues/336/events | https://github.com/huggingface/transformers/issues/336 | 416,450,176 | MDU6SXNzdWU0MTY0NTAxNzY= | 336 | F1 and EM scores output for run_squad.py | {
"login": "mingkkkkk",
"id": 48165602,
"node_id": "MDQ6VXNlcjQ4MTY1NjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/48165602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mingkkkkk",
"html_url": "https://github.com/mingkkkkk",
"followers_url": "https://api.github.com/users/mingkkkkk/followers",
"following_url": "https://api.github.com/users/mingkkkkk/following{/other_user}",
"gists_url": "https://api.github.com/users/mingkkkkk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mingkkkkk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mingkkkkk/subscriptions",
"organizations_url": "https://api.github.com/users/mingkkkkk/orgs",
"repos_url": "https://api.github.com/users/mingkkkkk/repos",
"events_url": "https://api.github.com/users/mingkkkkk/events{/privacy}",
"received_events_url": "https://api.github.com/users/mingkkkkk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
}
] | closed | false | null | [] | [
"Use the Squad python scripts available on their website",
"Is that run_squad.py? I used that one but didn’t see output scores, having\nthe output predictions files though. Thanks!\n\nabeljim <[email protected]>于2019年3月3日 周日上午3:03写道:\n\n> Use the Squad python scripts available on their website\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-pretrained-BERT/issues/336#issuecomment-469011833>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/At7y4r3yC9C41y_BrZptqWiUtrmLwnuMks5vS6vtgaJpZM4baqZe>\n> .\n>\n",
"https://rajpurkar.github.io/SQuAD-explorer/ get the eval script for the correct version",
"Thanks so much, really helps!\n\nabeljim <[email protected]>于2019年3月3日 周日上午3:08写道:\n\n> https://rajpurkar.github.io/SQuAD-explorer/ get the eval script for the\n> correct version\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-pretrained-BERT/issues/336#issuecomment-469012224>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/At7y4u9BkK-km5KUXJuBfzM3loMnr7gwks5vS60wgaJpZM4baqZe>\n> .\n>\n",
"@abeljim Hi Abel, \r\n\r\nI got a very strange problem in running the prediction only for run_squad.py and wonder if you have any idea about why this happens.\r\n I first ran the following codes to both do train and predict on the files:\r\n `python run_squad_final.py --bert_model bert-base-uncased --do_train --do_predict --do_lower_case --train_file train-v2.0.json --predict_file dev-v2.0.json --train_batch_size 6 --learning_rate 3e-5 --num_train_epochs 1.0 --max_seq_length 384 --doc_stride 128 --fp16 --version_2_with_negative --null_score_diff_threshold -1 --output_dir ./temp/\r\n/`\r\n\r\n The output predictions.json file looks normal, but when I tried to delete the \"--do_train\" part and only do the prediction on the same file, it gives very different and strange outputs, many of the answers are repetitive as below and the scores are like only 0.1:\r\n(The output in predictions are like:)\r\n\r\n> \"68cf05f67fd29c6f129fe2fb9\": \"mands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their\",\r\n> \"f5fead9187d56af2bdbfcb921\": \"mands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their\",\r\n> \"f9183ead5bb93aaa12ea37245\": \"mands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their\",\r\n> \"d583847c96cbbbfaaa99dfcad\": \"Rollo, agreed to swear fealty\",\r\n> \"e06fbbdb50af7ab3faefde618\": \"Rollo, agreed to swear fealty\",\r\n> \"cbb48eaacbbbfcccefb7aab7f\": \"Rollo, agreed to swear fealty\",\r\n\r\nDo you know what caused the problem?",
"Yeah sorry I forgot to respond. The way the it runs if the train flag is off that it will load a pretrained version of bert and run the prediction on that. A way to get this to work is to modify the file to load a saved trained version instead. I could add this functionality but I'm busy with school for the next two weeks. Modify lines 1011 to 1025 for a quick fix"
] | 1,551 | 1,552 | 1,552 | NONE | null | Hi,
I was doing prediction after fine-tuning the bert-base model and I was wondering whether the f1 and em scores will show automatically since I only saw the following two log outputs
03/02/2019 22:20:05 - INFO - __main__ - Writing predictions to: /tmp/debug_squad/predictions.json
03/02/2019 22:20:05 - INFO - __main__ - Writing nbest to: /tmp/debug_squad/nbest_predictions.json
Where am I able to get those scores? Thanks for any help!
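(For reference, a rough exact-match sketch against the predictions file — the file paths are assumptions, and this is not the official scorer; for the real EM/F1 numbers use the evaluation script from the SQuAD website, as suggested in the comments.)
```
import json

with open("dev-v1.1.json") as f:
    dataset = json.load(f)["data"]
with open("/tmp/debug_squad/predictions.json") as f:
    predictions = json.load(f)

total, exact = 0, 0
for article in dataset:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            total += 1
            gold = {a["text"].strip().lower() for a in qa["answers"]}
            pred = predictions.get(qa["id"], "").strip().lower()
            exact += int(pred in gold)
print("rough exact match: %.2f%%" % (100.0 * exact / total))
```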
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/336/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/335 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/335/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/335/comments | https://api.github.com/repos/huggingface/transformers/issues/335/events | https://github.com/huggingface/transformers/issues/335 | 416,195,621 | MDU6SXNzdWU0MTYxOTU2MjE= | 335 | Feature Request: GPT2 fine tuning | {
"login": "yet-another-account",
"id": 10374151,
"node_id": "MDQ6VXNlcjEwMzc0MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10374151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yet-another-account",
"html_url": "https://github.com/yet-another-account",
"followers_url": "https://api.github.com/users/yet-another-account/followers",
"following_url": "https://api.github.com/users/yet-another-account/following{/other_user}",
"gists_url": "https://api.github.com/users/yet-another-account/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yet-another-account/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yet-another-account/subscriptions",
"organizations_url": "https://api.github.com/users/yet-another-account/orgs",
"repos_url": "https://api.github.com/users/yet-another-account/repos",
"events_url": "https://api.github.com/users/yet-another-account/events{/privacy}",
"received_events_url": "https://api.github.com/users/yet-another-account/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, feel free to open a PR if you want.\r\nIt's just a regular PyTorch model so all the standard ways of training a PyTorch model work.",
"Is is possible to fine-tune GPT2 on downstream tasks currently?",
"same questions"
] | 1,551 | 1,562 | 1,551 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/335/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/334 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/334/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/334/comments | https://api.github.com/repos/huggingface/transformers/issues/334/events | https://github.com/huggingface/transformers/issues/334 | 415,998,482 | MDU6SXNzdWU0MTU5OTg0ODI= | 334 | pip install [--editable] . ---> Error | {
"login": "Esaada",
"id": 23050230,
"node_id": "MDQ6VXNlcjIzMDUwMjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/23050230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Esaada",
"html_url": "https://github.com/Esaada",
"followers_url": "https://api.github.com/users/Esaada/followers",
"following_url": "https://api.github.com/users/Esaada/following{/other_user}",
"gists_url": "https://api.github.com/users/Esaada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Esaada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Esaada/subscriptions",
"organizations_url": "https://api.github.com/users/Esaada/orgs",
"repos_url": "https://api.github.com/users/Esaada/repos",
"events_url": "https://api.github.com/users/Esaada/events{/privacy}",
"received_events_url": "https://api.github.com/users/Esaada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Did you run python install --editable .",
"There's a way to install cloned repositories with pip, but the easiest way is to use plain python for this:\r\n\r\nAfter cloning and changing into the pytorch-pretrained-BERT directory, run `python setup.py develop`.",
"Yes, please follow the installation instructions on the readme [here](https://github.com/huggingface/pytorch-pretrained-BERT#installation)",
"@thomwolf \r\nI have exactly the same problem after following readme installation (mentioned). I am using pytorch. \r\n\r\npython -m pytest -sv ./transformers/tests/ have two failed tests.\r\n\r\ntransformers/tests/modeling_bert_test.py::BertModelTest::test_bert_model PASSED\r\ntransformers/tests/modeling_bert_test.py::BertModelTest::test_bert_model_as_decoder FAILED\r\ntransformers/tests/modeling_bert_test.py::BertModelTest::test_config PASSED\r\ntransformers/tests/modeling_bert_test.py::BertModelTest::test_determinism PASSED\r\ntransformers/tests/modeling_bert_test.py::BertModelTest::test_for_masked_lm PASSED\r\ntransformers/tests/modeling_bert_test.py::BertModelTest::test_for_masked_lm_decoder FAILED\r\ntransformers/tests/modeling_bert_test.py::BertModelTest::test_for_multiple_choice PASSED\r\n\r\n======================================================= 2 failed, 403 passed, 227 skipped, 36 warnings in 49.14s ======================================================\r\n\r\n@bheinzerling,\r\npython setup.py develop can go through ok. But the test result is the same as above: two are two failed tests.\r\n\r\n\r\n",
"Anybody know why \"pip install [--editable] .\" failed here? It is some missing python package needed for this?",
"Please open a command line and enter `pip install git+https://github.com/huggingface/transformers.git` for installing Transformers library from source. However, **Transformers v-2.2.0 has been just released yesterday** and you can install it from PyPi with `pip install transformers`\r\n\r\nTry to install this latest version and launch the tests suite and keep us updated on the result!\r\n\r\n> Anybody know why \"pip install [--editable] .\" failed here? It is some missing python package needed for this?",
"@TheEdoardo93 \r\nThis is indeed the latest version installed( installed a few hours before)\r\n\r\nName: transformers\r\nVersion: 2.2.0\r\nSummary: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch\r\nHome-page: https://github.com/huggingface/transformers\r\nAuthor: Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors\r\nAuthor-email: [email protected]\r\nLicense: Apache\r\nLocation: /home/pcl/venvpytorch/lib/python3.6/site-packages\r\nRequires: sacremoses, numpy, requests, boto3, regex, tqdm, sentencepiece\r\nRequired-by: \r\n",
"@TheEdoardo93\r\nAfter uninstall and reinstall with pip install git+https://github.com/huggingface/transformers.git.\r\nStill the same results as before (two are failed)\r\n\r\n======================================================= 2 failed, 403 passed, 227 skipped, 36 warnings in 49.13s =======\r\n\r\nName: transformers\r\nVersion: 2.2.0\r\nSummary: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch\r\nHome-page: https://github.com/huggingface/transformers\r\nAuthor: Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors\r\nAuthor-email: [email protected]\r\nLicense: Apache\r\nLocation: /home/pcl/venvpytorch/opensource/transformers\r\nRequires: numpy, boto3, requests, tqdm, regex, sentencepiece, sacremoses\r\nRequired-by: \r\n",
"When I've executed `python -m pytest -sv ./transformers/tests/`, I've obtained the following result: `595 passed, 37 skipped, 36 warnings in 427.58s (0:07:07)`.\r\nWhen I've executed `python -m pytest -sv ./examples/`, I've obtained the following result: `15 passed, 7 warnings in 77.09s (0:01:17)`.\r\n\r\n> @TheEdoardo93\r\n> After uninstall and reinstall with pip install git+https://github.com/huggingface/transformers.git.\r\n> Still the same results as before (two are failed)\r\n> \r\n> ======================================================= 2 failed, 403 passed, 227 skipped, 36 warnings in 49.13s =======\r\n> \r\n> Name: transformers\r\n> Version: 2.2.0\r\n> Summary: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch\r\n> Home-page: https://github.com/huggingface/transformers\r\n> Author: Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors\r\n> Author-email: [[email protected]](mailto:[email protected])\r\n> License: Apache\r\n> Location: /home/pcl/venvpytorch/opensource/transformers\r\n> Requires: numpy, boto3, requests, tqdm, regex, sentencepiece, sacremoses\r\n> Required-by:",
"I did not install TensorFlow which is the reason for skips. I need reasons for failure. I guess I will install TensorFlow and see how it goes. ",
"In the `README.md` file, Transformers' authors says to install TensorFlow 2.0 and PyTorch 1.0.0+ **before** installing Transformers library.\r\n\r\n> I did not install TensorFlow which is the reason for skips. I need reasons for failure. I guess I will install TensorFlow and see how it goes.",
"\"First you need to install one of, or both, TensorFlow 2.0 and PyTorch.\" I don't think that is the reason for failure. ",
"Hi, I believe these two tests fail with an error similar to:\r\n\r\n```\r\n RuntimeError: expected device cpu and dtype Long but got device cpu and dtype Bool\r\n```\r\n\r\nIf I'm not mistaken you're running with torch 1.2 and we're testing with torch 1.3. This is a bug as we aim to support torch from 1.0.1+. Thank you for raising the issue, you can fix it by installing torch 1.3+ while we work on fixing this.",
"Thanks! Yeah, I found it too by verbose mode. I suddenly remember some\ntensorflow code have similar problem before. In my case,it is some const,\nI just changed it from int to float. Indeed I am using torch1.2. Will\nsee whether it works here or not. Any idea why the pip -e option is\nnot working?\n\nOn Wed, Nov 27, 2019 at 22:49 Lysandre Debut <[email protected]>\nwrote:r\n\n> Hi, I believe these two tests fail with an error similar to:\n>\n> RuntimeError: expected device cpu and dtype Long but got device cpu and dtype Bool\n>\n> If I'm not mistaken you're running with torch 1.2 and we're testing with\n> torch 1.3. This is a bug as we aim to support torch from 1.0.1+. Thank you\n> for raising the issue, you can fix it by installing torch 1.3+ while we\n> work on fixing this.\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/334?email_source=notifications&email_token=AA6O5IG4IUK6Z3ESWAIYOXLQV2CJDA5CNFSM4G3CE3DKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFJXR4I#issuecomment-559118577>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AA6O5IFKBX3QB5AVMTXA5P3QV2CJDANCNFSM4G3CE3DA>\n> .\n>\n",
"The `pip install -e .` is probably working, it's just that some tests are failing due to code not tests on Torch 1.2.0.\r\n\r\nThe install should have worked fine, and you should be fine with using every component in the library with torch 1.2.0 except the decoder architectures on which we are working now. Updating to torch 1.3.0 means it will work with decoder architectures too.",
"1.3 torch must work with cuda10.1? I have 10.0 for tensorflow which is\nstill having problem with 10.1. Thanks for the info. Really appreciate ur\nfast response!\n\nOn Wed, Nov 27, 2019 at 23:23 Lysandre Debut <[email protected]>\nwrote:\n\n> The pip install -e . is probably working, it's just that some tests are\n> failing due to code not tests on Torch 1.2.0.\n>\n> The install should have worked fine, and you should be fine with using\n> every component in the library with torch 1.2.0 except the decoder\n> architectures on which we are working now. Updating to torch 1.3.0 means it\n> will work with decoder architectures too.\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/334?email_source=notifications&email_token=AA6O5ICNJ4IRK65JEA6X2DTQV2GIBA5CNFSM4G3CE3DKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFJ3AOQ#issuecomment-559132730>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AA6O5IDZATDEY7PA5YMYF6TQV2GIBANCNFSM4G3CE3DA>\n> .\n>\n",
"In the official PyTorch documentation, in the [installation](https://pytorch.org/get-started/locally/) section, you can see that you can install PyTorch 1.3 with CUDA 9.2 or CUDA 10.1, so PyTorch 1.3 + CUDA 10.1 works!\r\n\r\n> 1.3 torch must work with cuda10.1? I have 10.0 for tensorflow which is still having problem with 10.1. Thanks for the info. Really appreciate ur fast response!\r\n> […](#)\r\n> On Wed, Nov 27, 2019 at 23:23 Lysandre Debut ***@***.***> wrote: The pip install -e . is probably working, it's just that some tests are failing due to code not tests on Torch 1.2.0. The install should have worked fine, and you should be fine with using every component in the library with torch 1.2.0 except the decoder architectures on which we are working now. Updating to torch 1.3.0 means it will work with decoder architectures too. — You are receiving this because you commented. Reply to this email directly, view it on GitHub <#334?email_source=notifications&email_token=AA6O5ICNJ4IRK65JEA6X2DTQV2GIBA5CNFSM4G3CE3DKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFJ3AOQ#issuecomment-559132730>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AA6O5IDZATDEY7PA5YMYF6TQV2GIBANCNFSM4G3CE3DA> .",
"What is the difference between the following?\r\n- `pip install [--editable] .`\r\n- `pip install -e .`\r\n- `python setup.py develop`\r\n\r\nThe first works doesn't work for me, yet is in the readme. The other two do. If this is system-dependent, shouldn't this be added to the readme?",
"@internetcoffeephone, using square brackets in a command line interface is a [common way](https://en.wikipedia.org/wiki/Command-line_interface#Command_description_syntax) to refer to optional parameters. The first command means that you can either use `pip install .` or `pip install --editable .`",
"@LysandreJik That makes sense, thanks for your answer!\r\n\r\nStill, I'd argue against putting it in the readme like that. Firstly because it doesn't produce a sensible error message - secondly because anyone who wants an editable installation will know about that optional parameter already.\r\n\r\nAs for the difference between the above commands, I found [this](https://stackoverflow.com/a/30306403) page:\r\n\r\n> Try to avoid calling setup.py directly, it will not properly tell pip that you've installed your package.\r\n>With pip install -e:\r\n>>For local projects, the “SomeProject.egg-info” directory is created relative to the project path. This is one advantage over just using setup.py develop, which creates the “egg-info” directly relative the current working directory.",
"I removed `[--editable]` from the instructions because I found them confusing (before stumbling upon this issue)."
] | 1,551 | 1,577 | 1,551 | NONE | null | Hi, when using "pip install [--editable] ." after cloning the repository,
I'm getting this error:
Exception:
Traceback (most recent call last):
File "/venv/lib/python3.5/site-packages/pip/_vendor/packaging/requirements.py", line 93, in __init__
req = REQUIREMENT.parseString(requirement_string)
File "/venv/lib/python3.5/site-packages/pip/_vendor/pyparsing.py", line 1814, in parseString
raise exc
File "/venv/lib/python3.5/site-packages/pip/_vendor/pyparsing.py", line 1804, in parseString
loc, tokens = self._parse( instring, 0 )
File "/venv/lib/python3.5/site-packages/pip/_vendor/pyparsing.py", line 1548, in _parseNoCache
loc,tokens = self.parseImpl( instring, preloc, doActions )
File "/venv/lib/python3.5/site-packages/pip/_vendor/pyparsing.py", line 3722, in parseImpl
loc, exprtokens = e._parse( instring, loc, doActions )
File "/venv/lib/python3.5/site-packages/pip/_vendor/pyparsing.py", line 1552, in _parseNoCache
loc,tokens = self.parseImpl( instring, preloc, doActions )
File "/venv/lib/python3.5/site-packages/pip/_vendor/pyparsing.py", line 3502, in parseImpl
raise ParseException(instring, loc, self.errmsg, self)
pip._vendor.pyparsing.ParseException: Expected stringEnd (at char 11), (line:1, col:12)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/venv/lib/python3.5/site-packages/pip/_internal/cli/base_command.py", line 179, in main
status = self.run(options, args)
File "/venv/lib/python3.5/site-packages/pip/_internal/commands/install.py", line 289, in run
self.name, wheel_cache
File "/venv/lib/python3.5/site-packages/pip/_internal/cli/base_command.py", line 269, in populate_requirement_set
wheel_cache=wheel_cache
File "/venv/lib/python3.5/site-packages/pip/_internal/req/constructors.py", line 280, in install_req_from_line
extras = Requirement("placeholder" + extras_as_string.lower()).extras
File "/venv/lib/python3.5/site-packages/pip/_vendor/packaging/requirements.py", line 97, in __init__
requirement_string[e.loc : e.loc + 8], e.msg
pip._vendor.packaging.requirements.InvalidRequirement: Parse error at "'[--edita'": Expected stringEnd
Has anyone seen anything like this? Any ideas? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/334/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/333 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/333/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/333/comments | https://api.github.com/repos/huggingface/transformers/issues/333/events | https://github.com/huggingface/transformers/issues/333 | 415,994,820 | MDU6SXNzdWU0MTU5OTQ4MjA= | 333 | Add lm and next sentence accuracy for run_lm_finetuning example | {
"login": "weiczhu",
"id": 11749368,
"node_id": "MDQ6VXNlcjExNzQ5MzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/11749368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weiczhu",
"html_url": "https://github.com/weiczhu",
"followers_url": "https://api.github.com/users/weiczhu/followers",
"following_url": "https://api.github.com/users/weiczhu/following{/other_user}",
"gists_url": "https://api.github.com/users/weiczhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weiczhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weiczhu/subscriptions",
"organizations_url": "https://api.github.com/users/weiczhu/orgs",
"repos_url": "https://api.github.com/users/weiczhu/repos",
"events_url": "https://api.github.com/users/weiczhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/weiczhu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, feel free to submit a PR for that."
] | 1,551 | 1,551 | 1,551 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/333/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/332 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/332/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/332/comments | https://api.github.com/repos/huggingface/transformers/issues/332/events | https://github.com/huggingface/transformers/issues/332 | 415,507,500 | MDU6SXNzdWU0MTU1MDc1MDA= | 332 | Train with custom data on bert question answering | {
"login": "navdeep1604",
"id": 25216533,
"node_id": "MDQ6VXNlcjI1MjE2NTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25216533?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/navdeep1604",
"html_url": "https://github.com/navdeep1604",
"followers_url": "https://api.github.com/users/navdeep1604/followers",
"following_url": "https://api.github.com/users/navdeep1604/following{/other_user}",
"gists_url": "https://api.github.com/users/navdeep1604/gists{/gist_id}",
"starred_url": "https://api.github.com/users/navdeep1604/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/navdeep1604/subscriptions",
"organizations_url": "https://api.github.com/users/navdeep1604/orgs",
"repos_url": "https://api.github.com/users/navdeep1604/repos",
"events_url": "https://api.github.com/users/navdeep1604/events{/privacy}",
"received_events_url": "https://api.github.com/users/navdeep1604/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"1. You can put the `pytorch_model.bin` file that was output from your finetuning on squad in some other folder and set that folder as the bert_model='path/to/this/folder'. The folder needs to have the files `bert_config.json` and `vocab.txt` from the first pretrained model you used though.\r\n2. I think you can first train on squad, then use the model to further train on your custom QA dataset, using that model (i.e. set bert_model as explained in 1.)\r\n3. You can read the squad training data with:\r\n\r\n```\r\nimport json\r\ninput_file = 'train-v1.1.json'\r\nwith open(input_file, \"r\", encoding='utf-8') as reader:\r\n input_data = json.load(reader)[\"data\"]\r\n```\r\n\r\nThe input data, under the top level \"data\" tag, holds \"paragraphs\" tags, which in turn holds texts in \"context\" tags, and questions and answers in \"qas\" tags. You can check the structure of the texts/questions/answers like this.\r\n\r\n```\r\nfrom pprint import pprint\r\npprint(input_data[0])\r\n\r\n{'paragraphs': [{'context': 'Architecturally, the school has a Catholic '\r\n \"character. Atop the Main Building's gold dome is \"\r\n 'a golden statue of the Virgin Mary. Immediately '\r\n 'in front of the Main Building and facing it, is a '\r\n 'copper statue of Christ with arms upraised with '\r\n 'the legend \"Venite Ad Me Omnes\". Next to the Main '\r\n 'Building is the Basilica of the Sacred Heart. '\r\n 'Immediately behind the basilica is the Grotto, a '\r\n 'Marian place of prayer and reflection. It is a '\r\n 'replica of the grotto at Lourdes, France where '\r\n 'the Virgin Mary reputedly appeared to Saint '\r\n 'Bernadette Soubirous in 1858. At the end of the '\r\n 'main drive (and in a direct line that connects '\r\n 'through 3 statues and the Gold Dome), is a '\r\n 'simple, modern stone statue of Mary.',\r\n 'qas': [{'answers': [{'answer_start': 515,\r\n 'text': 'Saint Bernadette Soubirous'}],\r\n 'id': '5733be284776f41900661182',\r\n 'question': 'To whom did the Virgin Mary allegedly '\r\n 'appear in 1858 in Lourdes France?'},\r\n {'answers': [{'answer_start': 188,\r\n 'text': 'a copper statue of Christ'}],\r\n 'id': '5733be284776f4190066117f',\r\n 'question': 'What is in front of the Notre Dame Main '\r\n 'Building?'},\r\n {'answers': [{'answer_start': 279,\r\n 'text': 'the Main Building'}],\r\n 'id': '5733be284776f41900661180',\r\n 'question': 'The Basilica of the Sacred heart at '\r\n 'Notre Dame is beside to which '\r\n 'structure?'},\r\n {'answers': [{'answer_start': 381,\r\n 'text': 'a Marian place of prayer and '\r\n 'reflection'}],\r\n 'id': '5733be284776f41900661181',\r\n 'question': 'What is the Grotto at Notre Dame?'},\r\n {'answers': [{'answer_start': 92,\r\n 'text': 'a golden statue of the Virgin '\r\n 'Mary'}],\r\n 'id': '5733be284776f4190066117e',\r\n 'question': 'What sits on top of the Main Building '\r\n 'at Notre Dame?'}]},\r\n {'context': \"As at most other universities, Notre Dame's .... \r\n\r\n(many more context and qas tags are printed here)\r\n```\r\n\r\nThe conversion from your custom data to this format depends on the current format of your data. But if you can create a python dict looking like this with your data, you can make a json file from it and use it as training data in the run_squad.py script.",
"@navdeep1604 or @maxlund or @thomwolf : Was a custom training done and tested? We faced few issues like:\r\n\r\n- After training, previous correct questions started getting wrong.\r\n- All questions are started answering same answer\r\n- All questions started answering something wrong\r\n\r\nWould anyone like to share observations, if same or different problems faced. And curious to know what actions or tricks were made to fix these issues.",
"This might help you setup a QA system with custom data, it's built on top of this repo: https://github.com/cdqa-suite/cdQA",
"Hi @SandeepBhutani , \r\nI faced similar issue, since my custom training data (240 QA pairs) was very less.\r\n",
"Hi, for anyone who has made a custom QA dataset, how did you go about get the start position and end position for the answers or did you already have them easily accessible? I have a large dataset set of questions with corresponding context given by people; however, I don't have the specific answers as there can be many acceptable answers. My goal is to determine whether the context contains an answer to the question (similar to squad 2.0). Preliminary results after fine tuning on Squad 2.0 weren't super great so I wanted to add more examples. Any recs on how I could label my data in the correct format for say a bert or would I need to crowd source labels from a vendor?",
"Hi @cformosa,\r\nThe package for QA system mentioned above also has an annotation tool that can help you with that task:\r\nhttps://github.com/cdqa-suite/cdQA-annotator",
"Thanks for the link @andrelmfarias . I was looking over it and it seems extremely useful for sure. Seems like it will take a long time to generate a large corpus of training data but nevertheless its seems quite helpful. Thanks!",
"Hi @andrelmfarias, thank you for sharing this great resource!\r\n\r\nThe cdQA-suite seems to cater to a specific kind of question answering, as described in your [Medium](https://towardsdatascience.com/how-to-create-your-own-question-answering-system-easily-with-python-2ef8abc8eb5) article. To summarise, it looks for the answer to a question from a collection of documents -- all these documents most likely contain different kinds of information regarding a particular topic. For example, a system built using cdQA could contain 10 different documents regarding 10 different historical periods, and you could ask it a question about any of these time periods, and it would search for the relevant answer within these 10 documents.\r\n\r\nHowever, if the system you want to build is such: you have 10 court orders, and you want to ask the system the same set of questions for each court order. For example:\r\n1. When was the order filed?\r\n2. Who filed the order?\r\n3. Who was the order filed against?\r\n4. Was there a settlement?\r\n\r\nIn this case, I wouldn't want the system to search through every document, but instead look for answers within the document itself. Exactly like SQuaD 2.0. \r\nMy assessment is that I wouldn't be able to build such a system using [cdQA](https://github.com/cdqa-suite/cdQA) but I could use the [cdQA annotator](https://github.com/cdqa-suite/cdQA-annotator) to build my dataset. Is that a sound assessment?\r\n\r\nAlso, I'm curious to hear your thoughts on how feasible it would be to expect good results when the `context` is rather long (anywhere between 2-10 pages). \r\nThank you :)",
"Hi @rsomani95 ,\r\nAs your questions are particularly related to `cdQA` I opened an issue with your questions in our repository to avoid spamming here: https://github.com/cdqa-suite/cdQA/issues/275\r\n\r\nI just answered them there.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,551 | 1,576 | 1,576 | NONE | null | Hi all
I have trained BERT question answering on the SQuAD v1 dataset. As I was using Colab, which was slow, I used 5000 examples from SQuAD and trained the model, which took 2 hours and gave an accuracy of 51%. My questions are:
1) I saved the pytorch_model.bin file after training. Can I use this new bin file and train again on the next 5000 SQuAD examples? Should I replace the old pytorch_model.bin created in the uncased folder with this new file? What steps do I need to follow?
2) I have custom data for training a custom question-answering model. Do I need to append it to the SQuAD dataset, or should the new file be the only training data? How can I leverage the SQuAD-trained model to further train on custom data?
3) Can anybody help me with a script to convert my data to SQuAD format? (A hedged sketch follows below.)
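For item 3, a minimal illustrative sketch — the field names follow the SQuAD v1.1 training-file layout; the example triple, title and file name are made up and not part of the original report:
```
import json

examples = [
    ("Ada Lovelace wrote the first algorithm.",
     "Who wrote the first algorithm?",
     "Ada Lovelace"),
]

paragraphs = []
for i, (context, question, answer) in enumerate(examples):
    paragraphs.append({
        "context": context,
        "qas": [{
            "id": "custom-%d" % i,
            "question": question,
            "answers": [{"text": answer, "answer_start": context.index(answer)}],
        }],
    })

squad_like = {"version": "1.1", "data": [{"title": "my_custom_qa", "paragraphs": paragraphs}]}
with open("custom_train.json", "w", encoding="utf-8") as f:
    json.dump(squad_like, f, ensure_ascii=False)
```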
Detailed steps for leveraging the SQuAD-trained model and training on custom data on top of it are appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/332/reactions",
"total_count": 5,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/332/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/331 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/331/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/331/comments | https://api.github.com/repos/huggingface/transformers/issues/331/events | https://github.com/huggingface/transformers/issues/331 | 415,505,133 | MDU6SXNzdWU0MTU1MDUxMzM= | 331 | Can BERT do the next-word-predict task? As it is bidirectional. | {
"login": "guotong1988",
"id": 4702353,
"node_id": "MDQ6VXNlcjQ3MDIzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guotong1988",
"html_url": "https://github.com/guotong1988",
"followers_url": "https://api.github.com/users/guotong1988/followers",
"following_url": "https://api.github.com/users/guotong1988/following{/other_user}",
"gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions",
"organizations_url": "https://api.github.com/users/guotong1988/orgs",
"repos_url": "https://api.github.com/users/guotong1988/repos",
"events_url": "https://api.github.com/users/guotong1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/guotong1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,551 | 1,551 | 1,551 | CONTRIBUTOR | null | How can we edit BERT to do the next-word-predict task?
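For illustration only — a hedged sketch of one possible approach (an assumption, not necessarily the intended answer): append a `[MASK]` token after the given words and let `BertForMaskedLM` score the vocabulary for that position.
```
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

# Mask the position right after the context and ask the model to fill it in.
tokens = ['[CLS]'] + tokenizer.tokenize('the cat sat on the') + ['[MASK]', '[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    predictions = model(input_ids)          # shape: (1, seq_len, vocab_size)
masked_index = len(tokens) - 2              # position of the [MASK] token
next_word_id = predictions[0, masked_index].argmax().item()
print(tokenizer.convert_ids_to_tokens([next_word_id]))
```
Because BERT was trained with a masked-token objective rather than left-to-right language modelling, this is only a rough stand-in for true next-word prediction.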
Thank you very much! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/331/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/330 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/330/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/330/comments | https://api.github.com/repos/huggingface/transformers/issues/330/events | https://github.com/huggingface/transformers/issues/330 | 415,471,564 | MDU6SXNzdWU0MTU0NzE1NjQ= | 330 | Can we fine tune our model on Chinese corpus | {
"login": "kennethliukai",
"id": 7746298,
"node_id": "MDQ6VXNlcjc3NDYyOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7746298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kennethliukai",
"html_url": "https://github.com/kennethliukai",
"followers_url": "https://api.github.com/users/kennethliukai/followers",
"following_url": "https://api.github.com/users/kennethliukai/following{/other_user}",
"gists_url": "https://api.github.com/users/kennethliukai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kennethliukai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennethliukai/subscriptions",
"organizations_url": "https://api.github.com/users/kennethliukai/orgs",
"repos_url": "https://api.github.com/users/kennethliukai/repos",
"events_url": "https://api.github.com/users/kennethliukai/events{/privacy}",
"received_events_url": "https://api.github.com/users/kennethliukai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should probably use the `bert-base-chinese` model to start from.\r\nPlease refer to the original bert tensorflow implementation from Google.\r\nThere are a lot of discussion about chinese models in the issues of this repo."
] | 1,551 | 1,551 | 1,551 | NONE | null | Is this pre-trained BERT good for NER or classification on Chinese corpus?
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/330/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/329 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/329/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/329/comments | https://api.github.com/repos/huggingface/transformers/issues/329/events | https://github.com/huggingface/transformers/issues/329 | 415,449,361 | MDU6SXNzdWU0MTU0NDkzNjE= | 329 | run_lm_finetuning - ZeroDivisionError | {
"login": "naga-dsalgo",
"id": 47925301,
"node_id": "MDQ6VXNlcjQ3OTI1MzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/47925301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naga-dsalgo",
"html_url": "https://github.com/naga-dsalgo",
"followers_url": "https://api.github.com/users/naga-dsalgo/followers",
"following_url": "https://api.github.com/users/naga-dsalgo/following{/other_user}",
"gists_url": "https://api.github.com/users/naga-dsalgo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naga-dsalgo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naga-dsalgo/subscriptions",
"organizations_url": "https://api.github.com/users/naga-dsalgo/orgs",
"repos_url": "https://api.github.com/users/naga-dsalgo/repos",
"events_url": "https://api.github.com/users/naga-dsalgo/events{/privacy}",
"received_events_url": "https://api.github.com/users/naga-dsalgo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Seems like an error with `t_total`. `t_total` is the number of training optimization steps of the optimizer defined [here](num_train_optimization_steps) in the `run_lm_finetuning` example.\r\nCan you make sure it's not zero?",
"Your `batch_size` of 32 is too big for such a small `train_file`, i.e. sample_text.txt. Try setting `batch_size` to 16 or send in a larger train file with more text. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,551 | 1,557 | 1,557 | NONE | null | Trying to get the run_lm_finetuning example working on the GPU machine below, but I keep getting a ZeroDivisionError. Any idea what could be causing this error?
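A hedged arithmetic illustration (variable names roughly follow the example script and are assumptions) of how the command below can produce a zero step count and hence a division by zero:
```
# With a tiny train_file, the floored division gives 0 optimization steps,
# and the later global_step / num_train_optimization_steps raises ZeroDivisionError.
num_examples = 20                 # e.g. a very small sample_text.txt
train_batch_size = 32
gradient_accumulation_steps = 1
num_train_epochs = 5
num_train_optimization_steps = int(num_examples / train_batch_size
                                   / gradient_accumulation_steps) * num_train_epochs
print(num_train_optimization_steps)   # 0
```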

python /home/ec2-user/SageMaker/bert_pytorch/pytorch-pretrained-BERT/examples/run_lm_finetuning.py --bert_model bert-base-uncased --do_train --train_file /home/ec2-user/SageMaker/bert_pytorch/pytorch-pretrained-BERT/samples/sample_text.txt --output_dir /home/ec2-user/SageMaker/bert_pytorch/pytorch-pretrained-BERT/models --num_train_epochs 5.0 --learning_rate 3e-5 --train_batch_size 32 --max_seq_length 32

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/329/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/328 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/328/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/328/comments | https://api.github.com/repos/huggingface/transformers/issues/328/events | https://github.com/huggingface/transformers/issues/328 | 415,323,350 | MDU6SXNzdWU0MTUzMjMzNTA= | 328 | PyTorch Huggingface BERT-NLP for Named Entity Recognition | {
"login": "AshwinAmbal",
"id": 29573024,
"node_id": "MDQ6VXNlcjI5NTczMDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/29573024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AshwinAmbal",
"html_url": "https://github.com/AshwinAmbal",
"followers_url": "https://api.github.com/users/AshwinAmbal/followers",
"following_url": "https://api.github.com/users/AshwinAmbal/following{/other_user}",
"gists_url": "https://api.github.com/users/AshwinAmbal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AshwinAmbal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AshwinAmbal/subscriptions",
"organizations_url": "https://api.github.com/users/AshwinAmbal/orgs",
"repos_url": "https://api.github.com/users/AshwinAmbal/repos",
"events_url": "https://api.github.com/users/AshwinAmbal/events{/privacy}",
"received_events_url": "https://api.github.com/users/AshwinAmbal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
}
] | closed | false | null | [] | [
"I've found a fix to get around this.\r\nRunning the same code with pytorch-pretrained-bert==0.4.0 solves the issue and the performance is restored to normal.\r\nThere's something messing with the model performance in BERT Tokenizer or BERTForTokenClassification in the new update which is affecting the model performance.\r\nHoping that HuggingFace clears this up soon. :)\r\nThanks.",
"> There's something messing with the model performance in BERT Tokenizer or BERTForTokenClassification in the new update which is affecting the model performance.\r\n> Hoping that HuggingFace clears this up soon. :)\r\n\r\nSounds like the issue should remain open?",
"Oh. I didn't know I closed the issue. Let me reopen it now.\n\nThanks.\n\nOn Tue, 5 Mar, 2019, 10:57 AM John Lehmann, <[email protected]>\nwrote:\n\n> There's something messing with the model performance in BERT Tokenizer or\n> BERTForTokenClassification in the new update which is affecting the model\n> performance.\n> Hoping that HuggingFace clears this up soon. :)\n>\n> Sounds like the issue should remain open?\n>\n> —\n> You are receiving this because you modified the open/close state.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-pretrained-BERT/issues/328#issuecomment-469814216>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AcM_oJw9xYJ3ppFG_egEJTrMQq22ERKhks5vTr4wgaJpZM4bVcDQ>\n> .\n>\n",
"Sorry about that. Didn't realise I closed the issue.\r\nReopened it now. :)",
"Seems strange that the tokenization changed.\r\n\r\nSo you were only having sequence with less than 512 tokens before and now some sequences are longer?\r\n\r\nWithout having access to your dataset I can't really help you but if you can compare the tokenized sequences in your dataset with pytorch-pretrained-bert==0.4.0 versus sequences tokenized with the current pytorch-pretrained-bert==0.6.1 to identify a sequence which is tokenized differently it could help find the root of the issue.\r\n\r\nThen maybe you can just post some part of a sequence or example which is tokenized differently without breaching your confidentiality clause?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I had the same issue when trying to use it with Flair for text classification. Can I know the root cause of this issue? Does this mean that my text part in the dataset is too long? ",
"Yes, BERT only accepts inputs smaller or equal to 512 tokens.",
"> Seems strange that the tokenization changed.\r\n> \r\n> So you were only having sequence with less than 512 tokens before and now some sequences are longer?\r\n> \r\n> Without having access to your dataset I can't really help you but if you can compare the tokenized sequences in your dataset with pytorch-pretrained-bert==0.4.0 versus sequences tokenized with the current pytorch-pretrained-bert==0.6.1 to identify a sequence which is tokenized differently it could help find the root of the issue.\r\n> \r\n> Then maybe you can just post some part of a sequence or example which is tokenized differently without breaching your confidentiality clause?\r\n\r\nI think I found a little bug in tokenization.py that may be related to this issue.\r\nI was facing a similar problem that using the newest version leads to a huge accuracy drop (from 88% to 22%) in a very common multi-class news title classification task. Using pytorch-pretrained-bert==0.4.0 was actually a workaround so I did a comparison of the tokenization logs of these two versions.\r\n\r\nthe main problem was that many tokens have different ids during training and evaluation. Compared to 0.4.0, the newest version has an additional function that saves the vocabulary to the output_dir/vocab.txt after training and then loads this generated vocab.txt instead during evaluation.\r\nIn my case, this generated vocab.txt differs from the original one because in https://github.com/huggingface/pytorch-pretrained-BERT/blob/3763f8944dc3fef8afb0c525a2ced8a04889c14f/pytorch_pretrained_bert/tokenization.py#L65 the tokenizer deletes all the trailing spaces. This actually strips different tokens, say a normal space and a non-break space into an identical empty token \"\". After changing this line to \"token = token.rstrip(\"¥n\") \", I was able to reproduce the expected accuracy using the newest version\r\n",
"@Ezekiel25c17 I'm a bit surprised that training spaces would be important in the vocabulary so I would like to investigate this deeper.\r\n\r\nCan you give me the reference of the following elements you were using in your tests:\r\n- the python version,\r\n- versions of pytorch-pretrained-bert\r\n- the pretrained model,\r\n- the vocabulary (probably same as the model I guess),\r\n- the example script.\r\n\r\nSo I can reproduce the behavior",
"@thomwolf \r\nyes sure,\r\n\r\n- Python 3.6.5\r\n- pytorch_pretrained_bert=0.6.2\r\n- pretrained model\r\n - [download link](http://nlp.ist.i.kyoto-u.ac.jp/DLcounter/lime.cgi?down=http://nlp.ist.i.kyoto-u.ac.jp/nl-resource/JapaneseBertPretrainedModel/Japanese_L-12_H-768_A-12_E-30_BPE.zip&name=Japanese_L-12_H-768_A-12_E-30_BPE.zip)\r\n - vocab.txt and pytorch_model.bin are contained\r\n - trained using Japanese Wikipedia\r\n- example script: run_classifier.py with a little modification to suit for a multi-class classification\r\n- also, you may need to comment out this line in tokenization.py because Japanese contains many Chinese characters\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/3763f8944dc3fef8afb0c525a2ced8a04889c14f/pytorch_pretrained_bert/tokenization.py#L235\r\n\r\nMaybe the point can be explained using the following example:\r\n\r\n- let's say we have a bert_model/vocab.txt contains only four tokens: 'a', 'b ', 'c', 'b'\r\n- then after loading it during training, vocab_train = {'a':0, 'c':2, 'b':3}\r\n- the saved output_dir/vocab.txt will be something like: 'a', 'c', 'b'\r\n- finally when loading output_dir/vocab.txt during evaluation, vocab_eval = {'a':0, 'c':1, 'b':2}\r\n",
"@Ezekiel25c17 Shuffled indices would make sense for the accuracy to drop. \r\n@thomwolf I had longer sequences before too but in pytorch-pretrained-bert==0.4.0 the statement\r\n`input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in tokenized_texts], maxlen=MAX_LEN, dtype=”long”, truncating=”post”, padding=”post”)` did not have a very strict implementation but in 0.6.1 it threw a Value Error which I overcame by truncating the sequences to 512 before feeding it to the \"tokenizer.convert_tokens_to_ids(txt)\" function. Either way, I was using only the first 75 tokens of the sentence (MAX_LEN=75). So it didn't matter to me. When I was re-running the same code this was the only statement that threw an error which was why I thought there must have been a change in this functionality in the update.",
"The issue is still there (current master or 1.0.0. release). Looks like 'BertForTokenClassification' is broken since 0.4.0 . With current version any trained model produces very low scores (dozens of percentage points lower than 0.4.0).",
"Sorry for misleading comment. BertForTokenClassification is fine, I just did not use the proper padding label (do not use 'O' label for padding, use a separate label, e.g. '[PAD]').",
"@IINemo if you are using an attention mask, then wouldn't the label for the padding not matter at all? ",
"Hi,\n\nIf you use “O” in versions of pytorch pretrained bert >= 0.5.0, the problem\nhappens because loss on padded tokens is ignored, then any wrong output of\nthe model on padded tokens will not be penalized and the model will learn\nwrong signal for labels “O”.\n\nThe full fixed version of the code that does sequence tagging with BERT and\nnewest version of pytorch pretrained bert is here:\nhttps://github.com/IINemo/bert_sequence_tagger\n\nThere is a class SequenceTaggerBert that works with tokenized sequences\n(e.g., nltk tokenizer) and does all the necessary preprocessing under the\nhood.\n\nBest\n\nOn Wed, Sep 11, 2019 at 9:50 AM Akash Saravanan <[email protected]>\nwrote:\n\n> @IINemo <https://github.com/IINemo> if you are using an attention mask,\n> then wouldn't the label for the padding not matter at all?\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-transformers/issues/328?email_source=notifications&email_token=AFAVG3P4WSZIAUGWKBZJDXTQJCIJRA5CNFSM4G2VYDIKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD6NOZYQ#issuecomment-530246882>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AFAVG3NTSU4CCAZYWSACMPLQJCIJRANCNFSM4G2VYDIA>\n> .\n>\n",
"> Yes, BERT only accepts inputs smaller or equal to 512 tokens.\r\n\r\nHi , I wanted to trained BERT for text more than 512 tokens ,I can not truncate text to 512 as there will be loss of information in that case.Could you please help how can I handle this or any other suggestion to build customized NER for my usecase using BERT.\r\n\r\nThanks"
] | 1,551 | 1,585 | 1,563 | NONE | null | I have been using your PyTorch implementation of Google’s [BERT][1] by [HuggingFace][2] for the MADE 1.0 dataset for quite some time now. Up until last time (11-Feb), I had been using the library and getting an **F-Score** of **0.81** for my Named Entity Recognition task by Fine Tuning the model. But this week when I ran the exact same code which had compiled and run earlier, it threw an error when executing this statement:
    input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in tokenized_texts], maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
> ValueError: Token indices sequence length is longer than the specified
> maximum sequence length for this BERT model (632 > 512). Running this
> sequence through BERT will result in indexing errors
The full code is available in this [colab notebook][3].
To get around this error I modified the above statement to the one below by taking the first 512 tokens of any sequence and made the necessary changes to add the index of [SEP] to the end of the truncated/padded sequence as required by BERT.
    input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt[:512]) for txt in tokenized_texts], maxlen=MAX_LEN, dtype="long", truncating="post", padding="post")
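For clarity, a hedged, self-contained sketch of that modification (the sentence is invented and this is not the exact notebook code; it only restores a `[SEP]` id when a sequence had to be cut at 512 wordpieces):
```
from keras.preprocessing.sequence import pad_sequences
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
MAX_LEN = 75
sep_id = tokenizer.convert_tokens_to_ids(['[SEP]'])[0]
tokenized_texts = [tokenizer.tokenize('Johanson lives in Berlin .') + ['[SEP]']]

def to_ids(tokens):
    ids = tokenizer.convert_tokens_to_ids(tokens)
    if len(ids) > 512:
        ids = ids[:511] + [sep_id]   # truncate, then restore the trailing [SEP]
    return ids

input_ids = pad_sequences([to_ids(t) for t in tokenized_texts], maxlen=MAX_LEN,
                          dtype="long", truncating="post", padding="post")
```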
The result shouldn't have changed, because I am only considering the first 512 tokens in the sequence and later truncating to 75 anyway (MAX_LEN=75), but my **F-Score** has dropped to **0.40** and my **precision** to **0.27**, while the **Recall** remains the same **(0.85)**.
Has anyone else faced a similar issue or can elaborate on what might be the issue or what changes the PyTorch (Huggingface) has done on their end recently?
[1]: https://github.com/google-research/bert#fine-tuning-with-bert
[2]: https://github.com/huggingface/pytorch-pretrained-BERT
[3]: https://colab.research.google.com/drive/1JxWdw1BjXZCFC2a8IwtZxvvq4rFGcxas
[4]: https://arxiv.org/abs/1810.04805 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/328/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/327 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/327/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/327/comments | https://api.github.com/repos/huggingface/transformers/issues/327/events | https://github.com/huggingface/transformers/pull/327 | 415,258,178 | MDExOlB1bGxSZXF1ZXN0MjU2Nzg1NjI3 | 327 | Issue#324: warmup linear fixes | {
"login": "lukovnikov",
"id": 1732910,
"node_id": "MDQ6VXNlcjE3MzI5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1732910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukovnikov",
"html_url": "https://github.com/lukovnikov",
"followers_url": "https://api.github.com/users/lukovnikov/followers",
"following_url": "https://api.github.com/users/lukovnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/lukovnikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukovnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukovnikov/subscriptions",
"organizations_url": "https://api.github.com/users/lukovnikov/orgs",
"repos_url": "https://api.github.com/users/lukovnikov/repos",
"events_url": "https://api.github.com/users/lukovnikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukovnikov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thanks @lukovnikov!"
] | 1,551 | 1,551 | 1,551 | CONTRIBUTOR | null | Fixes for [Issue#324](https://github.com/huggingface/pytorch-pretrained-BERT/issues/324).
- Using the same schedule functions in BertAdam and OpenAIAdam, fixing `warmup_linear` of OpenAIAdam
- fix for negative learning rate after t_total for `warmup_linear`
- some more docstrings
- warning when t_total is exceeded with `warmup_linear`, implemented inside `.step()` of the optimizer (maybe not that nice). Warning is printed on every batch update. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/327/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/327",
"html_url": "https://github.com/huggingface/transformers/pull/327",
"diff_url": "https://github.com/huggingface/transformers/pull/327.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/327.patch",
"merged_at": 1551861897000
} |
https://api.github.com/repos/huggingface/transformers/issues/326 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/326/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/326/comments | https://api.github.com/repos/huggingface/transformers/issues/326/events | https://github.com/huggingface/transformers/issues/326 | 414,938,885 | MDU6SXNzdWU0MTQ5Mzg4ODU= | 326 | run_classifier with evaluation job only | {
"login": "PaulZhangIsing",
"id": 27721543,
"node_id": "MDQ6VXNlcjI3NzIxNTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/27721543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulZhangIsing",
"html_url": "https://github.com/PaulZhangIsing",
"followers_url": "https://api.github.com/users/PaulZhangIsing/followers",
"following_url": "https://api.github.com/users/PaulZhangIsing/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulZhangIsing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulZhangIsing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulZhangIsing/subscriptions",
"organizations_url": "https://api.github.com/users/PaulZhangIsing/orgs",
"repos_url": "https://api.github.com/users/PaulZhangIsing/repos",
"events_url": "https://api.github.com/users/PaulZhangIsing/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulZhangIsing/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
" evalonly works last ver, need to mv some code out of train...",
"> evalonly works last ver, need to mv some code out of train...\r\n\r\nyup I agree. I shall do the eval loss and do it\r\n",
"Seems fixed in master, right? Feel free to re-open the issue if it's not the case."
] | 1,551 | 1,551 | 1,551 | NONE | null | Thanks for giving such awesome project.
However, I have encountered a problem.
After training the model, I just want to run eval on another dataset with the trained model, so I enable only do_eval.
However, it gives me this error:
Traceback (most recent call last):
File "run_classifier_torch.py", line 687, in <module>
main()
File "run_classifier_torch.py", line 677, in main
'loss': tr_loss/nb_tr_steps}
UnboundLocalError: local variable 'tr_loss' referenced before assignment
It seems that loss and tr_loss are only defined during training. If the training step is skipped, this error appears.
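A hedged, self-contained toy of the guard this implies (names mirror the example script, values are fake):
```
do_train, do_eval = False, True
eval_loss, nb_tr_steps, tr_loss = 0.42, 0, 0.0

result = {'eval_loss': eval_loss}
if do_train and nb_tr_steps > 0:   # tr_loss / nb_tr_steps is undefined for eval-only runs
    result['loss'] = tr_loss / nb_tr_steps
print(result)                      # {'eval_loss': 0.42}
```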
Solution: use eval_loss instead. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/326/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/325 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/325/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/325/comments | https://api.github.com/repos/huggingface/transformers/issues/325/events | https://github.com/huggingface/transformers/pull/325 | 414,937,998 | MDExOlB1bGxSZXF1ZXN0MjU2NTM0ODg4 | 325 | add BertTokenizer flag to skip basic tokenization | {
"login": "john-hewitt",
"id": 8755768,
"node_id": "MDQ6VXNlcjg3NTU3Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8755768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/john-hewitt",
"html_url": "https://github.com/john-hewitt",
"followers_url": "https://api.github.com/users/john-hewitt/followers",
"following_url": "https://api.github.com/users/john-hewitt/following{/other_user}",
"gists_url": "https://api.github.com/users/john-hewitt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/john-hewitt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/john-hewitt/subscriptions",
"organizations_url": "https://api.github.com/users/john-hewitt/orgs",
"repos_url": "https://api.github.com/users/john-hewitt/repos",
"events_url": "https://api.github.com/users/john-hewitt/events{/privacy}",
"received_events_url": "https://api.github.com/users/john-hewitt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the PR (and documenting this), I added a note",
"Ok this is great, thanks @john-hewitt, thanks!"
] | 1,551 | 1,553 | 1,551 | CONTRIBUTOR | null | When tokenization is done before text hits this package (e.g., when tokenization is specified as part of the dataset) there exists a use case for skipping the `BasicTokenizer` step, going right to `WordpieceTokenizer`.
When one still wants to use the `BertTokenizer.from_pretrained` helper function, they have been able to do this (without claiming this is necessarily the best way) by
```
text = "[CLS] `` Truly , pizza is delicious , '' said Mx. Caily."
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
tokenized_text = bert_tokenizer.wordpiece_tokenizer.tokenize(text)
```
With this PR, we instead use
```
text = "[CLS] `` Truly , pizza is delicious , '' said Mx. Caily."
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_basic_tokenize=False)
tokenized_text = bert_tokenizer.tokenize(text)
```
a flag for which I add documentation in the docstring and README, hopefully making it clear that this is possible.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/325/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/325",
"html_url": "https://github.com/huggingface/transformers/pull/325",
"diff_url": "https://github.com/huggingface/transformers/pull/325.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/325.patch",
"merged_at": 1551861432000
} |
https://api.github.com/repos/huggingface/transformers/issues/324 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/324/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/324/comments | https://api.github.com/repos/huggingface/transformers/issues/324/events | https://github.com/huggingface/transformers/issues/324 | 414,694,497 | MDU6SXNzdWU0MTQ2OTQ0OTc= | 324 | warmup_linear for BertAdam and OpenAIAdam | {
"login": "lukovnikov",
"id": 1732910,
"node_id": "MDQ6VXNlcjE3MzI5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1732910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukovnikov",
"html_url": "https://github.com/lukovnikov",
"followers_url": "https://api.github.com/users/lukovnikov/followers",
"following_url": "https://api.github.com/users/lukovnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/lukovnikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukovnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukovnikov/subscriptions",
"organizations_url": "https://api.github.com/users/lukovnikov/orgs",
"repos_url": "https://api.github.com/users/lukovnikov/repos",
"events_url": "https://api.github.com/users/lukovnikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukovnikov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I just ran into this problem while running BERT on large samples from `run_squad.py`. I think a fix would be welcome because this is a really disturbing and hard to catch issue. \r\n\r\nIt would probably be enough to move the optimizer creation + computing of `num_train_optimization_steps` inside the train loop.",
"Happy to welcome a PR on this indeed.\r\nI'm not super fan of silently hide a wrong `t_total` by setting `lr` to zero so maybe sending a warning `logger. warning` at the same time would be nice too.",
"made a PR: https://github.com/huggingface/pytorch-pretrained-BERT/pull/327",
"Fixed in master now, thanks @lukovnikov!"
] | 1,551 | 1,567 | 1,551 | CONTRIBUTOR | null | 1. OpenAIAdam version of `warmup_linear` does not linearly increase lr, instead it looks like this:

This is different from BertAdam version of `warmup_linear`. Should they not be the same (Bert version)?
2. if `t_total` is specified incorrectly (too small), learning rate becomes negative after t_total for both versions. Probably it would be better to set lr to 0 to avoid situations like [Issue#297](https://github.com/huggingface/pytorch-pretrained-BERT/issues/297). Also, with a too small `t_total`, there is a drop in lr right after `warmup` is reached:

Let me know if I should PR a fix for both.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/324/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/324/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/323 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/323/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/323/comments | https://api.github.com/repos/huggingface/transformers/issues/323/events | https://github.com/huggingface/transformers/issues/323 | 414,690,179 | MDU6SXNzdWU0MTQ2OTAxNzk= | 323 | What should be the label of sub-word units in Token Classification with Bert | {
"login": "ereday",
"id": 13196191,
"node_id": "MDQ6VXNlcjEzMTk2MTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/13196191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ereday",
"html_url": "https://github.com/ereday",
"followers_url": "https://api.github.com/users/ereday/followers",
"following_url": "https://api.github.com/users/ereday/following{/other_user}",
"gists_url": "https://api.github.com/users/ereday/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ereday/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ereday/subscriptions",
"organizations_url": "https://api.github.com/users/ereday/orgs",
"repos_url": "https://api.github.com/users/ereday/repos",
"events_url": "https://api.github.com/users/ereday/events{/privacy}",
"received_events_url": "https://api.github.com/users/ereday/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have a similar problem. I labeled the tokens as \"X\" and then got an error relating to NUM_LABELS. BERT appears to have thought the X was a third label, and I only specified there to be two labels.",
"You do not need to introduce an additional tag. This is explained here:\r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/issues/64#issuecomment-443703063",
"Yes, I've left #64 open to discuss all these questions. Feel free to read the discussion there and ask questions if needed. Closing this issue.",
"@ereday AFAIK \r\nTo answer your question \"How the sub-tokens could be masked during training & testing\"\r\nThere is no need of masking. The sub-word token_ids (except for the first) are not fed to the BERT model.\r\nPlease tell me if i am wrong."
] | 1,551 | 1,626 | 1,551 | NONE | null | Hi,
I'm trying to use BERT for a token-level tagging problem such as NER in German.
This is what I've done so far for input preparation:
```
from pytorch_pretrained_bert.tokenization import BertTokenizer, WordpieceTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased", do_lower_case=False)
sentences= ["Bis 2013 steigen die Mittel aus dem EU-Budget auf rund 120 Millionen Euro ."]
labels = [["O","O","O","O","O","O","O","B-ORGpart","O","O","O","O","B-OTH","O"]]
tokens = tokenizer.tokenize(sentences[0])
```
When I check the tokens I see that there are now 18 tokens instead of 14 (as expected) because of the sub-word units.
```
>>> tokens
['Bis', '2013', 'st', '##eig', '##en', 'die', 'Mittel', 'aus', 'dem', 'EU', '##-', '##B', '##ud', '##get', 'auf', 'rund', '120', 'Millionen', 'Euro', '.']
```
My question is: how should I modify the labels array? Should I label each sub-word unit with the label of the original word, or should I do something else? As a second question, which one of the examples in the repository can be used as example code for this purpose? `run_classifier.py`? `run_squad.py`?
**UPDATE**
OK, according to the paper it should be handled as follows (From Section 4.3 of BERT paper):
> To make this compatible with WordPiece tokenization, we feed each CoNLL-tokenized
> input word into our WordPiece tokenizer and use the hidden state corresponding to the first
> sub-token as input to the classifier. Where no prediction is made for X. Since
> the WordPiece tokenization boundaries are a known part of the input, this is done for both
> training and test.
Then, for the above example , the correct input output pair is :
```
['Bis', '2013', 'st', '##eig', '##en', 'die', 'Mittel', 'aus', 'dem', 'EU', '##-', '##B', '##ud', '##get', 'auf', 'rund', '120', 'Millionen', 'Euro', '.']
['O', 'O', 'O', 'X', 'X', 'O', 'O', 'O', 'O', 'B-ORGpart', 'X', 'X', 'X', 'X', 'O', 'O', 'O', 'O', 'B-OTH', 'O']
```
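One common way to realise this (a hedged sketch, not necessarily the paper's exact recipe) is to give the sub-word positions a dummy label id and let the loss ignore them:
```
import torch
import torch.nn as nn

label_map = {'O': 0, 'B-ORGpart': 1, 'B-OTH': 2}
tokens = ['Bis', '2013', 'st', '##eig', '##en', 'die']
labels = ['O',   'O',    'O',  'X',     'X',    'O']

label_ids = [label_map[l] if l != 'X' else -1 for l in labels]

logits = torch.randn(len(tokens), len(label_map))   # stand-in for model output
loss_fn = nn.CrossEntropyLoss(ignore_index=-1)       # 'X' positions contribute no loss
loss = loss_fn(logits, torch.tensor(label_ids))
print(label_ids, loss.item())
```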
Then my question evolves to: "How can the sub-tokens be masked during training & testing?" | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/323/reactions",
"total_count": 14,
"+1": 14,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/323/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/322 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/322/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/322/comments | https://api.github.com/repos/huggingface/transformers/issues/322/events | https://github.com/huggingface/transformers/issues/322 | 414,596,654 | MDU6SXNzdWU0MTQ1OTY2NTQ= | 322 | Single sentence corpus in run_lm_finetuning? | {
"login": "vebits",
"id": 9068991,
"node_id": "MDQ6VXNlcjkwNjg5OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9068991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vebits",
"html_url": "https://github.com/vebits",
"followers_url": "https://api.github.com/users/vebits/followers",
"following_url": "https://api.github.com/users/vebits/following{/other_user}",
"gists_url": "https://api.github.com/users/vebits/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vebits/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vebits/subscriptions",
"organizations_url": "https://api.github.com/users/vebits/orgs",
"repos_url": "https://api.github.com/users/vebits/repos",
"events_url": "https://api.github.com/users/vebits/events{/privacy}",
"received_events_url": "https://api.github.com/users/vebits/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"https://github.com/huggingface/pytorch-pretrained-BERT/issues/272\r\n\r\nI had the same issue and but apparently this cant be done in BERT",
"Yes, can't be done currently. Feel free to submit a PR to extend the `run_lm_finetuning` example @vebits!"
] | 1,551 | 1,551 | 1,551 | NONE | null | Hi,
I am trying to pre-train using `BertForPreTraining` in `run_lm_finetuning.py`. My target corpus consists of very many tweets, and I am unsure how the model will handle that, since they are mostly only one sentence. Will it affect the IsNextSentence task?
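For reference, a hedged sketch of the layout the example's sample corpus uses — one sentence per line, with a blank line between documents (the tweets are invented, and single-sentence documents may still be a problem for the next-sentence objective):
```
tweets = [
    "just watched the game, what a finish!",
    "new phone arrived today, battery life is great.",
]
with open("tweets_corpus.txt", "w", encoding="utf-8") as f:
    for tweet in tweets:
        f.write(tweet.strip() + "\n")   # one 'sentence'
        f.write("\n")                   # blank line ends this 'document'
```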
Should my .txt input file consist of one tweet on each line, where each tweet is separated by an empty line? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/322/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/321 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/321/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/321/comments | https://api.github.com/repos/huggingface/transformers/issues/321/events | https://github.com/huggingface/transformers/issues/321 | 414,583,129 | MDU6SXNzdWU0MTQ1ODMxMjk= | 321 | how to load classification model and predict? | {
"login": "Jasperty",
"id": 37020799,
"node_id": "MDQ6VXNlcjM3MDIwNzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/37020799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jasperty",
"html_url": "https://github.com/Jasperty",
"followers_url": "https://api.github.com/users/Jasperty/followers",
"following_url": "https://api.github.com/users/Jasperty/following{/other_user}",
"gists_url": "https://api.github.com/users/Jasperty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jasperty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jasperty/subscriptions",
"organizations_url": "https://api.github.com/users/Jasperty/orgs",
"repos_url": "https://api.github.com/users/Jasperty/repos",
"events_url": "https://api.github.com/users/Jasperty/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jasperty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Should work. Without more information I can't really help you."
] | 1,551 | 1,551 | 1,551 | NONE | null | I use my output dir as bert_model, but it cannot find the model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/321/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/320 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/320/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/320/comments | https://api.github.com/repos/huggingface/transformers/issues/320/events | https://github.com/huggingface/transformers/issues/320 | 414,497,924 | MDU6SXNzdWU0MTQ0OTc5MjQ= | 320 | what is the batch size we can use for SQUAD task? | {
"login": "leonwyang",
"id": 32276166,
"node_id": "MDQ6VXNlcjMyMjc2MTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/32276166?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leonwyang",
"html_url": "https://github.com/leonwyang",
"followers_url": "https://api.github.com/users/leonwyang/followers",
"following_url": "https://api.github.com/users/leonwyang/following{/other_user}",
"gists_url": "https://api.github.com/users/leonwyang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leonwyang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leonwyang/subscriptions",
"organizations_url": "https://api.github.com/users/leonwyang/orgs",
"repos_url": "https://api.github.com/users/leonwyang/repos",
"events_url": "https://api.github.com/users/leonwyang/events{/privacy}",
"received_events_url": "https://api.github.com/users/leonwyang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"solved"
] | 1,551 | 1,551 | 1,551 | NONE | null | I am running the squad example.
I have a Tesla M60 GPU, which has about 8GB of memory. For the bert-large-uncased model, I can only use a batch size of 2, even after using --fp16. Is that normal?
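A hedged arithmetic note (flag names follow the squad example and are assumptions): gradient accumulation can keep the effective batch size large even when only 2 examples fit in memory per step.
```
per_step_batch_size = 2            # what fits on an 8GB card for bert-large
gradient_accumulation_steps = 12
effective_batch_size = per_step_batch_size * gradient_accumulation_steps
print(effective_batch_size)        # 24
```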
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/320/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/319 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/319/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/319/comments | https://api.github.com/repos/huggingface/transformers/issues/319/events | https://github.com/huggingface/transformers/issues/319 | 414,366,217 | MDU6SXNzdWU0MTQzNjYyMTc= | 319 | run_classifier.py: TypeError: __init__() got an unexpected keyword argument 'cache_dir' | {
"login": "VarnithChordia",
"id": 16621441,
"node_id": "MDQ6VXNlcjE2NjIxNDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/16621441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VarnithChordia",
"html_url": "https://github.com/VarnithChordia",
"followers_url": "https://api.github.com/users/VarnithChordia/followers",
"following_url": "https://api.github.com/users/VarnithChordia/following{/other_user}",
"gists_url": "https://api.github.com/users/VarnithChordia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VarnithChordia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VarnithChordia/subscriptions",
"organizations_url": "https://api.github.com/users/VarnithChordia/orgs",
"repos_url": "https://api.github.com/users/VarnithChordia/repos",
"events_url": "https://api.github.com/users/VarnithChordia/events{/privacy}",
"received_events_url": "https://api.github.com/users/VarnithChordia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @VarnithChordia \r\n\r\nWere you able to fix this issue? Because I run into a similar issue.\r\n\r\nI would appreciate any guidance on this issue. \r\n\r\nThank you.\r\n",
"same. bump. Latest master does not handle cache_dir or mode ",
"@PetreanuAndi You are correct that in the latest master this issue occurs. The way I was able to fix the code was the following:\r\n\r\nin the `run_glue.py` file, change lines 137-149:\r\n\r\n``` \r\ntrain_dataset = (\r\n GlueDataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None\r\n )\r\n eval_dataset = (\r\n GlueDataset(data_args, tokenizer=tokenizer, mode=\"dev\", cache_dir=model_args.cache_dir)\r\n if training_args.do_eval\r\n else None\r\n )\r\n\r\n test_dataset = (\r\n GlueDataset(data_args, tokenizer=tokenizer, mode=\"test\", cache_dir=model_args.cache_dir)\r\n if training_args.do_predict\r\n else None\r\n )\r\n\r\n```\r\nTo:\r\n\r\n```\r\ntrain_dataset = (\r\n GlueDataset(data_args, tokenizer=tokenizer) if training_args.do_train else None\r\n )\r\n eval_dataset = (\r\n GlueDataset(data_args, tokenizer=tokenizer, mode=\"dev\")\r\n if training_args.do_eval\r\n else None\r\n )\r\n test_dataset = (\r\n GlueDataset(data_args, tokenizer=tokenizer, mode=\"test\")\r\n if training_args.do_predict\r\n else None\r\n )\r\n```\r\n\r\nHope this helps.\r\n\r\n",
"Hi! That's weird, the `cache_dir` argument is available on the `GlueDataset`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/f45e873910e60d89511ae0193711e71c5c710468/src/transformers/data/datasets/glue.py#L58-L75\r\n\r\nIs it possible you haven't pulled the new changes in a while? If you get an error, could you please open a new issue with all the information relative to your environment, the command you ran and the stack trace? Thanks a lot!"
] | 1,551 | 1,594 | 1,551 | NONE | null | python3 run_classifier.py
--task_name MRPC
--do_train
--do_eval
--do_lower_case
--data_dir $GLUE_DIR/MRPC/
--bert_model bert-base-uncased
--max_seq_length 128
--train_batch_size 32
--learning_rate 2e-5
--num_train_epochs 3.0
--output_dir /tmp/mrpc_outputunexpected
02/25/2019 15:50:51 - INFO - __main__ - device: cuda n_gpu: 10, distributed training: False, 16-bits training: False
02/25/2019 15:50:51 - INFO - pytorch_pretrained_bert.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt not found in cache, downloading to /tmp/tmp30rk0ety
02/25/2019 15:50:52 - INFO - pytorch_pretrained_bert.file_utils - copying /tmp/tmp30rk0ety to cache at ..pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
02/25/2019 15:50:52 - INFO - pytorch_pretrained_bert.file_utils - creating metadata file for /tilde/vchordia/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
02/25/2019 15:50:52 - INFO - pytorch_pretrained_bert.file_utils - removing temp file /tmp/tmp30rk0ety
02/25/2019 15:50:52 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at ./.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
02/25/2019 15:50:52 - INFO - __main__ - LOOKING AT ./glue_data/MRPC/train.tsv
02/25/2019 15:51:39 - INFO - pytorch_pretrained_bert.file_utils - creating metadata file for ./.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
02/25/2019 15:51:39 - INFO - pytorch_pretrained_bert.file_utils - removing temp file /tmp/tmpacv7p93x
02/25/2019 15:51:39 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at ./.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
02/25/2019 15:51:39 - INFO - pytorch_pretrained_bert.modeling - extracting archive file ./.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpsrcv3o3c
02/25/2019 15:51:44 - INFO - pytorch_pretrained_bert.modeling - Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 30522
}
Traceback (most recent call last):
File "run_classifier.py", line 637, in <module>
main()
File "run_classifier.py", line 468, in main
num_labels = num_labels)
File "./anaconda3/envs/py_deep/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 502, in from_pretrained
model = cls(config, *inputs, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'cache_dir'
My issue is that I have been trying to run this test case, as suggested in the readme, to test the classifier. I am not sure why the class method does not accept the cache_dir argument. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/319/timeline | completed | null | null |
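The traceback above suggests the installed pytorch_pretrained_bert release forwards unrecognized keyword arguments from `from_pretrained` into the model constructor, so the example script's `cache_dir` argument lands in `__init__`. A hedged sketch of the simplest workaround — drop `cache_dir` from the call (or upgrade the package so it matches the example script); the exact signature below is an assumption about that older release:

```python
from pytorch_pretrained_bert.modeling import BertForSequenceClassification

# Workaround sketch: call from_pretrained without cache_dir so nothing
# unexpected gets forwarded into BertForSequenceClassification.__init__.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,   # MRPC is a two-class task
)
```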
https://api.github.com/repos/huggingface/transformers/issues/318 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/318/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/318/comments | https://api.github.com/repos/huggingface/transformers/issues/318/events | https://github.com/huggingface/transformers/issues/318 | 413,789,252 | MDU6SXNzdWU0MTM3ODkyNTI= | 318 | TransfoXLLMHeadModel output interpretation | {
"login": "bergen",
"id": 886963,
"node_id": "MDQ6VXNlcjg4Njk2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/886963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bergen",
"html_url": "https://github.com/bergen",
"followers_url": "https://api.github.com/users/bergen/followers",
"following_url": "https://api.github.com/users/bergen/following{/other_user}",
"gists_url": "https://api.github.com/users/bergen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bergen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bergen/subscriptions",
"organizations_url": "https://api.github.com/users/bergen/orgs",
"repos_url": "https://api.github.com/users/bergen/repos",
"events_url": "https://api.github.com/users/bergen/events{/privacy}",
"received_events_url": "https://api.github.com/users/bergen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n1/ it's the usual language modeling probabilities: each token probability given the previous tokens\r\n2/ thanks, fixed."
] | 1,550 | 1,551 | 1,551 | NONE | null | TransfoXLLMHeadModel gives an output of log probabilities of shape [batch_size, sequence_length, n_tokens]. What do these probabilities represent? For example, what distribution is output at the first sequence position? Is it the conditional distribution given the first word? If so, how can the probability of a complete sentence be computed, including the first word?
Also, the readme states:
> softmax_output: output of the (adaptive) softmax:
> if target is None: Negative log likelihood of shape [batch_size, sequence_length]
This appears to be incorrect. From current behavior, it should say: if target is **not** None | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/318/timeline | completed | null | null |
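To make the first question concrete, one way to score a whole sentence with these conditional log-probabilities is sketched below; it assumes the head returns `(log_probs, new_mems)` when `target` is None and that position t holds the distribution over the token at position t+1, so the unconditioned first token contributes nothing to the score:

```python
import torch
from pytorch_pretrained_bert import TransfoXLTokenizer, TransfoXLLMHeadModel

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
model.eval()

tokens = tokenizer.tokenize("the cat sat on the mat")
ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    log_probs, mems = model(ids)          # [1, seq_len, n_tokens] when target is None
# Score token t+1 with the distribution produced at position t.
per_token = log_probs[0, :-1, :].gather(1, ids[0, 1:].unsqueeze(-1)).squeeze(-1)
sentence_log_prob = per_token.sum()       # excludes the unconditioned first token
```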
https://api.github.com/repos/huggingface/transformers/issues/317 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/317/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/317/comments | https://api.github.com/repos/huggingface/transformers/issues/317/events | https://github.com/huggingface/transformers/issues/317 | 413,719,230 | MDU6SXNzdWU0MTM3MTkyMzA= | 317 | anyone notice large difference of using fp16 ? | {
"login": "howardhsu",
"id": 10661375,
"node_id": "MDQ6VXNlcjEwNjYxMzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/10661375?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/howardhsu",
"html_url": "https://github.com/howardhsu",
"followers_url": "https://api.github.com/users/howardhsu/followers",
"following_url": "https://api.github.com/users/howardhsu/following{/other_user}",
"gists_url": "https://api.github.com/users/howardhsu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/howardhsu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howardhsu/subscriptions",
"organizations_url": "https://api.github.com/users/howardhsu/orgs",
"repos_url": "https://api.github.com/users/howardhsu/repos",
"events_url": "https://api.github.com/users/howardhsu/events{/privacy}",
"received_events_url": "https://api.github.com/users/howardhsu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | I recently noticed that using fp16 dropped the performance of BERT on my own dataset but improved it on another (it works fine on examples like MRPC). The gap is about 4%, so it is unlikely to be random noise.
I'm trying to see the reason and noticed examples from apex:
https://github.com/NVIDIA/apex/tree/master/examples
actually use a global fp32 master copy of the parameters during training, whereas the examples in this repository use fp16 for all steps (and for saving the parameters). Could this be the reason? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/317/timeline | completed | null | null |
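For readers unfamiliar with the pattern the issue refers to, this toy sketch (not code from either repository) shows the fp32 "master weights" approach used in the apex examples, as opposed to keeping every step in fp16; it needs a CUDA device because CPU fp16 matmuls are generally unsupported:

```python
import torch

device = torch.device("cuda")                     # fp16 matmuls need a GPU
model = torch.nn.Linear(10, 2).to(device).half()  # fp16 working copy of the weights
master = [p.detach().clone().float().requires_grad_(True) for p in model.parameters()]
optimizer = torch.optim.SGD(master, lr=0.1)       # optimizer only ever sees fp32 params
loss_scale = 128.0

x = torch.randn(4, 10, device=device).half()
loss = model(x).float().pow(2).mean()
(loss * loss_scale).backward()                    # scaled fp16 backward pass
for m, p in zip(master, model.parameters()):
    m.grad = p.grad.detach().float() / loss_scale # unscale gradients into fp32 copies
optimizer.step()                                  # the update happens in fp32
with torch.no_grad():
    for m, p in zip(master, model.parameters()):
        p.copy_(m.half())                         # write the update back to the fp16 model
```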
https://api.github.com/repos/huggingface/transformers/issues/316 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/316/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/316/comments | https://api.github.com/repos/huggingface/transformers/issues/316/events | https://github.com/huggingface/transformers/pull/316 | 413,621,140 | MDExOlB1bGxSZXF1ZXN0MjU1NTY2MTU5 | 316 | update documentation for gpt-2 | {
"login": "joelgrus",
"id": 1308313,
"node_id": "MDQ6VXNlcjEzMDgzMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1308313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joelgrus",
"html_url": "https://github.com/joelgrus",
"followers_url": "https://api.github.com/users/joelgrus/followers",
"following_url": "https://api.github.com/users/joelgrus/following{/other_user}",
"gists_url": "https://api.github.com/users/joelgrus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joelgrus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joelgrus/subscriptions",
"organizations_url": "https://api.github.com/users/joelgrus/orgs",
"repos_url": "https://api.github.com/users/joelgrus/repos",
"events_url": "https://api.github.com/users/joelgrus/events{/privacy}",
"received_events_url": "https://api.github.com/users/joelgrus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @joelgrus, you are right, the docstring were lagging a lot.\r\nAll the information is in the `README.py`, more specifically [these sections detailing the API of the GPT2 models](https://github.com/huggingface/pytorch-pretrained-BERT#14-gpt2model) but I forgot to update the docstrings. Do you want to have a look and tell me if it's detailed enough for the docstrings also?",
"yes, that's great, I made those changes. (I also feel kind of dumb for not looking at the docs docs.)\r\n\r\nsorry about the trailing whitespace changes in the README.md, my editor removes those automatically.",
"Thanks Joel!"
] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | fixes a few incorrect details in the gpt-2 documentation.
One remaining thing: all of the models return an extra `presents` variable that I'm not quite sure about, so there's a ... in the doc. If you tell me what to put there, I can add it, or you can do it yourself. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/316/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/316",
"html_url": "https://github.com/huggingface/transformers/pull/316",
"diff_url": "https://github.com/huggingface/transformers/pull/316.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/316.patch",
"merged_at": 1550997510000
} |
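Regarding the open question about `presents`: in this release it appears to hold the per-layer cached key/value states, which can be passed back in as `past` so later decoding steps only process the new token. A sketch, hedging on the exact return signature:

```python
import torch
from pytorch_pretrained_bert import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = torch.tensor([tokenizer.encode("The transformer architecture")])
with torch.no_grad():
    logits, presents = model(ids)                      # presents: cached keys/values per layer
    next_id = logits[0, -1].argmax().view(1, 1)        # greedy pick of the next token
    logits, presents = model(next_id, past=presents)   # only the new token is re-encoded
```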
https://api.github.com/repos/huggingface/transformers/issues/315 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/315/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/315/comments | https://api.github.com/repos/huggingface/transformers/issues/315/events | https://github.com/huggingface/transformers/issues/315 | 413,590,083 | MDU6SXNzdWU0MTM1OTAwODM= | 315 | run_classifier.py : TypeError: join() argument must be str or bytes, not 'PosixPath' | {
"login": "WilliamTambellini",
"id": 109458,
"node_id": "MDQ6VXNlcjEwOTQ1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/109458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WilliamTambellini",
"html_url": "https://github.com/WilliamTambellini",
"followers_url": "https://api.github.com/users/WilliamTambellini/followers",
"following_url": "https://api.github.com/users/WilliamTambellini/following{/other_user}",
"gists_url": "https://api.github.com/users/WilliamTambellini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WilliamTambellini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WilliamTambellini/subscriptions",
"organizations_url": "https://api.github.com/users/WilliamTambellini/orgs",
"repos_url": "https://api.github.com/users/WilliamTambellini/repos",
"events_url": "https://api.github.com/users/WilliamTambellini/events{/privacy}",
"received_events_url": "https://api.github.com/users/WilliamTambellini/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/file_utils.py#L30\r\n`PYTORCH_PRETRAINED_BERT_CACHE = Path(os.getenv('PYTORCH_PRETRAINED_BERT_CACHE',\r\n Path.home() / '.pytorch_pretrained_bert'))` --> \r\n`PYTORCH_PRETRAINED_BERT_CACHE = str(Path(os.getenv('PYTORCH_PRETRAINED_BERT_CACHE',\r\n Path.home() / '.pytorch_pretrained_bert')))` would solve this, I don't know if there are any side effects.\r\n\r\nMaybe, a test should be added here?",
"The above does not resolve the err",
"Let's rather keep the library's internal using `Path` and fix the examples by adding `str` there instead.\r\nFixed on master now."
] | 1,550 | 1,551 | 1,551 | CONTRIBUTOR | null | when trying the MRPC example :
python3.5 run_classifier.py \
--task_name MRPC \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $GLUE_DIR/MRPC/ \
--bert_model bert-base-uncased \
--max_seq_length 128 \
--train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/mrpc_output
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
02/22/2019 21:29:48 - INFO - __main__ - device: cuda n_gpu: 1, distributed training: False, 16-bits training: False
02/22/2019 21:29:48 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /home/ubuntu/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
02/22/2019 21:29:48 - INFO - __main__ - LOOKING AT /home/ubuntu/glue_data/MRPC/train.tsv
Traceback (most recent call last):
File "pytorch-pretrained-BERT/examples/run_classifier.py", line 637, in <module>
main()
File "pytorch-pretrained-BERT/examples/run_classifier.py", line 465, in main
cache_dir = args.cache_dir if args.cache_dir else os.path.join(PYTORCH_PRETRAINED_BERT_CACHE, 'distributed_{}'.format(args.local_rank))
File "/usr/lib/python3.5/posixpath.py", line 89, in join
genericpath._check_arg_types('join', a, *p)
File "/usr/lib/python3.5/genericpath.py", line 143, in _check_arg_types
(funcname, s.__class__.__name__)) from None
TypeError: join() argument must be str or bytes, not 'PosixPath'
ubuntu 16
python 3.5
torch 1.0.1
Collecting torch>=0.4.1 (from pytorch-pretrained-bert)
Downloading https://files.pythonhosted.org/packages/59/d2/4e806f73b4b72daab9064c99394fc22ea6ef1fb052154546405057cd192d/torch-1.0.1.post2-cp35-cp35m-manylinux1_x86_64.whl (582.5MB)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/315/timeline | completed | null | null |
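The fix that landed on master, sketched in isolation: cast the Path-valued cache constant to str before it reaches os.path.join, since Python 3.5's posixpath.join rejects PosixPath arguments (the local_rank value below is just a placeholder for args.local_rank):

```python
import os
from pytorch_pretrained_bert.file_utils import PYTORCH_PRETRAINED_BERT_CACHE

local_rank = -1  # placeholder for args.local_rank
cache_dir = os.path.join(str(PYTORCH_PRETRAINED_BERT_CACHE),
                         'distributed_{}'.format(local_rank))
```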
https://api.github.com/repos/huggingface/transformers/issues/314 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/314/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/314/comments | https://api.github.com/repos/huggingface/transformers/issues/314/events | https://github.com/huggingface/transformers/issues/314 | 413,272,916 | MDU6SXNzdWU0MTMyNzI5MTY= | 314 | Issue with apex import on MAC | {
"login": "bhoomit",
"id": 1269954,
"node_id": "MDQ6VXNlcjEyNjk5NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1269954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhoomit",
"html_url": "https://github.com/bhoomit",
"followers_url": "https://api.github.com/users/bhoomit/followers",
"following_url": "https://api.github.com/users/bhoomit/following{/other_user}",
"gists_url": "https://api.github.com/users/bhoomit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhoomit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhoomit/subscriptions",
"organizations_url": "https://api.github.com/users/bhoomit/orgs",
"repos_url": "https://api.github.com/users/bhoomit/repos",
"events_url": "https://api.github.com/users/bhoomit/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhoomit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Unfortunately, apex (and fp16 in general) only work on GPU. So you can't use it on MacOS :/"
] | 1,550 | 1,550 | 1,550 | NONE | null | Python 3.7
MacOS High Sierra 10.13.6
```
Traceback (most recent call last):
File "examples/classifier.py", line 1, in <module>
from pytorch_pretrained_bert.tokenization import BertTokenizer, WordpieceTokenizer
File "/Users/Bhoomit/work/robin/nlp/pytorch-pretrained-BERT/env/lib/python3.7/site-packages/pytorch_pretrained_bert/__init__.py", line 7, in <module>
from .modeling import (BertConfig, BertModel, BertForPreTraining,
File "/Users/Bhoomit/work/robin/nlp/pytorch-pretrained-BERT/env/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 218, in <module>
from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 668, in _load_unlocked
File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible
File "/Users/Bhoomit/work/robin/nlp/pytorch-pretrained-BERT/env/lib/python3.7/site-packages/apex-0.1-py3.7.egg/apex/__init__.py", line 12, in <module>
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 668, in _load_unlocked
File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible
File "/Users/Bhoomit/work/robin/nlp/pytorch-pretrained-BERT/env/lib/python3.7/site-packages/apex-0.1-py3.7.egg/apex/optimizers/__init__.py", line 2, in <module>
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 668, in _load_unlocked
File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible
File "/Users/Bhoomit/work/robin/nlp/pytorch-pretrained-BERT/env/lib/python3.7/site-packages/apex-0.1-py3.7.egg/apex/optimizers/fp16_optimizer.py", line 8, in <module>
File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ctypes/__init__.py", line 369, in __getattr__
func = self.__getitem__(name)
File "/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ctypes/__init__.py", line 374, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: dlsym(RTLD_DEFAULT, THCudaHalfTensor_normall): symbol not found
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/314/timeline | completed | null | null |
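Since apex (and fp16 in general) is GPU-only, uninstalling apex is usually enough on macOS: when the apex import raises ImportError, modeling.py falls back to a pure-PyTorch layer norm, roughly as sketched below (treat the details as an approximation of that fallback, not the library's exact code):

```python
import torch
from torch import nn

try:
    from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm
except ImportError:
    class BertLayerNorm(nn.Module):
        """Pure-PyTorch fallback used when apex is unavailable."""
        def __init__(self, hidden_size, eps=1e-12):
            super(BertLayerNorm, self).__init__()
            self.weight = nn.Parameter(torch.ones(hidden_size))
            self.bias = nn.Parameter(torch.zeros(hidden_size))
            self.variance_epsilon = eps

        def forward(self, x):
            u = x.mean(-1, keepdim=True)
            s = (x - u).pow(2).mean(-1, keepdim=True)
            x = (x - u) / torch.sqrt(s + self.variance_epsilon)
            return self.weight * x + self.bias
```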
https://api.github.com/repos/huggingface/transformers/issues/313 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/313/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/313/comments | https://api.github.com/repos/huggingface/transformers/issues/313/events | https://github.com/huggingface/transformers/issues/313 | 413,241,264 | MDU6SXNzdWU0MTMyNDEyNjQ= | 313 | run_lm_finetuning | {
"login": "Shi-Linqing-Jason",
"id": 40857896,
"node_id": "MDQ6VXNlcjQwODU3ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/40857896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shi-Linqing-Jason",
"html_url": "https://github.com/Shi-Linqing-Jason",
"followers_url": "https://api.github.com/users/Shi-Linqing-Jason/followers",
"following_url": "https://api.github.com/users/Shi-Linqing-Jason/following{/other_user}",
"gists_url": "https://api.github.com/users/Shi-Linqing-Jason/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shi-Linqing-Jason/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shi-Linqing-Jason/subscriptions",
"organizations_url": "https://api.github.com/users/Shi-Linqing-Jason/orgs",
"repos_url": "https://api.github.com/users/Shi-Linqing-Jason/repos",
"events_url": "https://api.github.com/users/Shi-Linqing-Jason/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shi-Linqing-Jason/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"What tr_loss are you exactly printing here? Is it possible that you just print this one here? https://github.com/huggingface/pytorch-pretrained-BERT/blob/2152bfeae82439600dc5b5deab057a3c4331c62d/examples/run_lm_finetuning.py#L600 If yes, you should divide it by the number of training steps (nb_tr_steps) first to get your average train loss. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,550 | 1,557 | 1,557 | NONE | null | When I run_lm_finetuning with the exemplary training corpus (small_wiki_sentence_corpus.txt), I printed the tr_loss every 20 steps. I found that the tr_loss increases very fast. I wonder what the reason is.


| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/313/timeline | completed | null | null |
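A tiny illustration of the fix suggested in the comment: tr_loss in the script accumulates per-batch losses, so the printed number grows even when training is healthy; dividing by nb_tr_steps gives the running average instead (the loss values below are stand-ins):

```python
tr_loss, nb_tr_steps = 0.0, 0
for step, batch_loss in enumerate([0.9, 0.8, 0.8, 0.7, 0.7, 0.6]):  # stand-in losses
    tr_loss += batch_loss                        # what the script accumulates
    nb_tr_steps += 1
    print(step, tr_loss, tr_loss / nb_tr_steps)  # the raw sum keeps growing; the average does not
```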
https://api.github.com/repos/huggingface/transformers/issues/312 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/312/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/312/comments | https://api.github.com/repos/huggingface/transformers/issues/312/events | https://github.com/huggingface/transformers/issues/312 | 413,204,487 | MDU6SXNzdWU0MTMyMDQ0ODc= | 312 | Problems converting TF BioBERT model to PyTorch | {
"login": "jwhite2a",
"id": 20924193,
"node_id": "MDQ6VXNlcjIwOTI0MTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20924193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jwhite2a",
"html_url": "https://github.com/jwhite2a",
"followers_url": "https://api.github.com/users/jwhite2a/followers",
"following_url": "https://api.github.com/users/jwhite2a/following{/other_user}",
"gists_url": "https://api.github.com/users/jwhite2a/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jwhite2a/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jwhite2a/subscriptions",
"organizations_url": "https://api.github.com/users/jwhite2a/orgs",
"repos_url": "https://api.github.com/users/jwhite2a/repos",
"events_url": "https://api.github.com/users/jwhite2a/events{/privacy}",
"received_events_url": "https://api.github.com/users/jwhite2a/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have solved my issue. All my code was correctly written. The error was an corrupted/improperly saved model.bin file.",
"I'm trying to convert BioBert to Pytorch also, so just wondering if you could share a bit more details on how you are doing the conversion. Thanks!",
"First, I downloaded the BioBERT TF checkpoints [here](https://github.com/naver/biobert-pretrained). Each model (i.e. biobert_pmc) should have three `.ckpt` files, a `vocab.txt` file, and a `bert_config.json` file. \r\n\r\nInitially, I tried to use the command line interface `pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch` using the `bert_config.json` and `.ckpt` files seen above. I ran into `AttributeError: 'Parameter' object has no attribute 'BERTAdam'`. I followed the solution [here](https://github.com/dmis-lab/biobert/issues/2). \r\n\r\nTo do this, I copied the `convert_tf_checkpoint_to_pytorch.py` file and the `load_tf_weights_in_bert` function found in `modeling.py`. I then added the two lines seen in the [solution above](https://github.com/dmis-lab/biobert/issues/2) in my own version of the function and file. \r\n\r\nGiven correct file paths, this worked to convert all three BioBERT checkpoints into pytorch `.bin` files.",
"@jwhite2a Thank you! This worked for me."
] | 1,550 | 1,553 | 1,551 | NONE | null | My goal is to convert and train on the [BioBERT pretrained checkpoints](https://github.com/naver/biobert-pretrained) in pytorch and train on the [SQuAD v2.0 Dataset](https://rajpurkar.github.io/SQuAD-explorer/).
I have (seemingly) successfully transfered the checkpoint using the `./pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py` [script](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py).
I loaded the converted checkpoint into the `run_squad.py` [example](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py). I also changed the tokenizer to use the vocab file found with the BioBERT model. At this point, I was able to train the model and observe the loss to decrease.
My first issue appeared when trying to write the SQuAD predictions. `best_non_null_entry.start_logit` did not have a `start_logit` because the `best_non_null_entry` was `NoneType`. This error resembles the previous issue #207. I implemented the solution found and my code was able to run.
My results from training have been the same or worse than a random model. Nearly all of the SQuAD predictions are the "empty" string text from the fix of #207.
**I believe the original cause of the `NoneType` error for `best_non_null_entry` is the reason for the failure to predict anything.**
Are there specs to obey when converting a TF pretrained BERT model?
What would cause the `NoneType` error for `best_non_null_entry`?
Any and all help is appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/312/timeline | completed | null | null |
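For anyone following the conversion recipe in the comments, here is a sketch of the programmatic equivalent of the CLI call; the paths are placeholders and the function signature should be double-checked against the installed version:

```python
from pytorch_pretrained_bert.convert_tf_checkpoint_to_pytorch import (
    convert_tf_checkpoint_to_pytorch,
)

# Placeholders: point these at the downloaded BioBERT checkpoint prefix,
# its bert_config.json, and the desired output .bin path.
convert_tf_checkpoint_to_pytorch(
    "biobert_pubmed/biobert_model.ckpt",
    "biobert_pubmed/bert_config.json",
    "biobert_pubmed/pytorch_model.bin",
)
```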
https://api.github.com/repos/huggingface/transformers/issues/311 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/311/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/311/comments | https://api.github.com/repos/huggingface/transformers/issues/311/events | https://github.com/huggingface/transformers/issues/311 | 412,870,208 | MDU6SXNzdWU0MTI4NzAyMDg= | 311 | Shouldn't GPT2 use Linear instead of Conv1D? | {
"login": "spolu",
"id": 15067,
"node_id": "MDQ6VXNlcjE1MDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/15067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spolu",
"html_url": "https://github.com/spolu",
"followers_url": "https://api.github.com/users/spolu/followers",
"following_url": "https://api.github.com/users/spolu/following{/other_user}",
"gists_url": "https://api.github.com/users/spolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spolu/subscriptions",
"organizations_url": "https://api.github.com/users/spolu/orgs",
"repos_url": "https://api.github.com/users/spolu/repos",
"events_url": "https://api.github.com/users/spolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/spolu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe this would break pre-trained weights loading? Interested to understand if that's the only reason?",
"Possibility, feel free to test the modification and submit a PR @spolu!",
"Hi guys,\r\nI also wondered whether anyone modified the gpt2 model to have nn.Linear instead of Conv1D layers (using the pre-trained weights).\r\nDid any of you success or found such implementation?",
"Hi folks,\r\n\r\nI am working on a use case which takes only nn.Linear layers (so the code worked fine for BERT) but it didn't for GPT-2 because of the same reason?\r\n\r\nIs there a way to change the Conv1D() to linear layers? Below is the actual gpt2 I was getting when using standard script form HF - \r\n\r\n```\r\nGPT2Model(\r\n (wte): Embedding(50257, 768)\r\n (wpe): Embedding(1024, 768)\r\n (drop): Dropout(p=0.1, inplace=False)\r\n (h): ModuleList(\r\n (0-11): 12 x GPT2Block(\r\n (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (attn): GPT2Attention(\r\n (c_attn): Conv1D()\r\n (c_proj): Conv1D()\r\n (attn_dropout): Dropout(p=0.1, inplace=False)\r\n (resid_dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (mlp): GPT2MLP(\r\n (c_fc): Conv1D()\r\n (c_proj): Conv1D()\r\n (act): NewGELUActivation()\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n )\r\n (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n)\r\n```"
] | 1,550 | 1,705 | 1,551 | NONE | null | Conv1D seems to be inherited from GPT but does not seem to serve any special purpose in GPT2 (BERT uses Linear).
Should GPT2's model be moved to using Linear (which is easier to grasp obvioulsy)? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/311/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/311/timeline | completed | null | null |
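To make the equivalence concrete: Conv1D computes x @ W + b with the weight stored as (in_features, out_features), i.e. an nn.Linear with its weight transposed — which is also why a straight swap would break pre-trained weight loading unless the weights are transposed on load. A small self-contained check:

```python
import torch

W = torch.randn(4, 6)              # Conv1D-style weight: (nx, nf)
b = torch.randn(6)
x = torch.randn(2, 4)
conv1d_out = x @ W + b             # what Conv1D.forward effectively does

lin = torch.nn.Linear(4, 6)
with torch.no_grad():
    lin.weight.copy_(W.t())        # nn.Linear stores (out_features, in_features)
    lin.bias.copy_(b)
assert torch.allclose(conv1d_out, lin(x), atol=1e-6)
```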
https://api.github.com/repos/huggingface/transformers/issues/310 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/310/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/310/comments | https://api.github.com/repos/huggingface/transformers/issues/310/events | https://github.com/huggingface/transformers/pull/310 | 412,821,213 | MDExOlB1bGxSZXF1ZXN0MjU0OTQyMDc3 | 310 | Few small nits in GPT-2's README code examples | {
"login": "spolu",
"id": 15067,
"node_id": "MDQ6VXNlcjE1MDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/15067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spolu",
"html_url": "https://github.com/spolu",
"followers_url": "https://api.github.com/users/spolu/followers",
"following_url": "https://api.github.com/users/spolu/following{/other_user}",
"gists_url": "https://api.github.com/users/spolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spolu/subscriptions",
"organizations_url": "https://api.github.com/users/spolu/orgs",
"repos_url": "https://api.github.com/users/spolu/repos",
"events_url": "https://api.github.com/users/spolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/spolu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"They were just basic typos :) Thanks Stanislas"
] | 1,550 | 1,550 | 1,550 | NONE | null | (unless these were on purpose as a responsible disclosure mechanism :p) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/310/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/310",
"html_url": "https://github.com/huggingface/transformers/pull/310",
"diff_url": "https://github.com/huggingface/transformers/pull/310.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/310.patch",
"merged_at": 1550741008000
} |