url (string, 62-66) | repository_url (string, 1 class) | labels_url (string, 76-80) | comments_url (string, 71-75) | events_url (string, 69-73) | html_url (string, 50-56) | id (int64, 377M-2.15B) | node_id (string, 18-32) | number (int64, 1-29.2k) | title (string, 1-487) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, nullable) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, 0-234k, nullable) | reactions (dict) | timeline_url (string, 71-75) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/208 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/208/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/208/comments | https://api.github.com/repos/huggingface/transformers/issues/208/events | https://github.com/huggingface/transformers/pull/208 | 400,951,566 | MDExOlB1bGxSZXF1ZXN0MjQ2MDE5MTc4 | 208 | Merge run_squad.py and run_squad2.py | {
"login": "Liangtaiwan",
"id": 20909894,
"node_id": "MDQ6VXNlcjIwOTA5ODk0",
"avatar_url": "https://avatars.githubusercontent.com/u/20909894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Liangtaiwan",
"html_url": "https://github.com/Liangtaiwan",
"followers_url": "https://api.github.com/users/Liangtaiwan/followers",
"following_url": "https://api.github.com/users/Liangtaiwan/following{/other_user}",
"gists_url": "https://api.github.com/users/Liangtaiwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Liangtaiwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Liangtaiwan/subscriptions",
"organizations_url": "https://api.github.com/users/Liangtaiwan/orgs",
"repos_url": "https://api.github.com/users/Liangtaiwan/repos",
"events_url": "https://api.github.com/users/Liangtaiwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Liangtaiwan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"#152 run squad pull request with Squad 2.0 was originally one file but was asked to be separated into two files for commit. #174",
"@abeljim Thanks for your replying, but I think that they should merge to a single file due to easier maintain and too many repeat code.",
"How many epochs used in SQuAD2.0 in your test?\r\n\r\n> @abeljim ",
"> How many epochs used in SQuAD2.0 in your test?\r\n> \r\n> > @abeljim\r\n\r\nJust 2",
"Ok I didn't notice the original implementation had these scripts merged in a single file.\r\nI guess we merge these scripts together here also.\r\nDo you want to update your branch to resolve the merge conflicts @Liangtaiwan?",
"Ok merged, thanks @Liangtaiwan ",
"Hi @Liangtaiwan and @abeljim , should models trained on SQuAD 2 use the `version_2_with_negative` flag when evaluating on SQuAD 1.1 dev sets? I'm noticing more than a 10 point difference with and without the flag. Thanks!\r\n\r\nResults with the flag appear to be closer to the performance on the answerable examples in SQuAD 2, but I wanted to confirm.",
"Hi @samsontmr, that's a good question.\r\n\r\nIn my opinion, since there is still a plausible answer to the unanswerable questions in SQuAD 2.0, you should use the version ```verions_2_with_negative``` when training.",
"Thanks! I think you mean when testing?",
"Hi @samsontmr, sorry for misunderstanding your question.\r\nIf you trained the model with ```verions_2_with_negative``` on SQuAD 2.0, the only difference when you evaluate on SQuAD 1.1 is post-processing. \r\nWith ```verions_2_with_negative``` flag when testing, the model would output unanswerable which is not a valid choice in SQuAD 1.1. This might the reason why there is a 10 point difference with or without the flag.\r\nI don't think you need ```verions_2_with_negative``` flag when testing. \r\nCould you paste the result and the version of transformers you used? ",
"I see! Unfortunately I didn't save the results, but I got an F1 of 70+ with the flag, which is closer to the squad 2 \"hasAns\" results, and 80+ without the flag."
] | 1,547 | 1,597 | 1,549 | CONTRIBUTOR | null | Merge run_squad.py and run_squad2.py into a single file, as in the [official BERT repo](https://github.com/google-research/bert).
I did an integration test myself and ran it once.
The following are the scores with the base-uncased model:
| Task | Exact | F1 |
| ------------- | ------------- | ------------- |
| SQuAD v1.1 | 80.90823084200568| 88.03529050425266 |
| SQuAD v2.0 | 72.56801145456078| 75.65274647953608 | | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/208/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/208",
"html_url": "https://github.com/huggingface/transformers/pull/208",
"diff_url": "https://github.com/huggingface/transformers/pull/208.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/208.patch",
"merged_at": 1549379704000
} |
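The Exact and F1 columns in the score table of the record above are the standard SQuAD metrics. For reference, here is a minimal, self-contained sketch of how these two numbers are typically computed for a single prediction/reference pair; it is an illustration of the metrics, not code from the merged script.

```python
# Minimal sketch of SQuAD-style Exact Match and F1 for one prediction/reference
# pair (the two columns reported in the table above). Illustration only.
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    return float(normalize(prediction) == normalize(reference))

def f1(prediction, reference):
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the puppeteer", "puppeteer"))   # 1.0 after normalization
print(f1("a famous puppeteer", "puppeteer"))       # ~0.67 (partial credit)
```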
https://api.github.com/repos/huggingface/transformers/issues/207 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/207/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/207/comments | https://api.github.com/repos/huggingface/transformers/issues/207/events | https://github.com/huggingface/transformers/issues/207 | 400,885,697 | MDU6SXNzdWU0MDA4ODU2OTc= | 207 | AttributeError: 'NoneType' object has no attribute 'start_logit' | {
"login": "rahular",
"id": 1104544,
"node_id": "MDQ6VXNlcjExMDQ1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1104544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rahular",
"html_url": "https://github.com/rahular",
"followers_url": "https://api.github.com/users/rahular/followers",
"following_url": "https://api.github.com/users/rahular/following{/other_user}",
"gists_url": "https://api.github.com/users/rahular/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rahular/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahular/subscriptions",
"organizations_url": "https://api.github.com/users/rahular/orgs",
"repos_url": "https://api.github.com/users/rahular/repos",
"events_url": "https://api.github.com/users/rahular/events{/privacy}",
"received_events_url": "https://api.github.com/users/rahular/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you post a self-contained example to reproduce your error ? Which version of python, pytorch and pytorch-pretrained-bert are you using?",
"Closing since there is no recent activity. Feel free to re-open if needed.",
"I ran into the same issue. Any pointers on how I could triage this further?",
"@thomwolf sorry, was busy with something else. Will post a self-sufficient example and possibly a PR with the fix soon.",
"=============================================================================================================================================================================\r\nReinstalling:\r\n subscription-manager-rhsm x86_64 1.20.10-7.el6 rhel-6-server-rpms 285 k\r\n\r\nTransaction Summary\r\n=============================================================================================================================================================================\r\nReinstall 1 Package(s)\r\n\r\nTotal size: 285 k\r\nInstalled size: 379 k\r\nIs this ok [y/N]: y\r\nDownloading Packages:\r\nRunning Transaction Test\r\nTraceback (most recent call last):\r\n File \"/usr/bin/yum\", line 29, in <module>\r\n yummain.user_main(sys.argv[1:], exit_code=True)\r\n File \"/usr/share/yum-cli/yummain.py\", line 298, in user_main\r\n errcode = main(args)\r\n File \"/usr/share/yum-cli/yummain.py\", line 227, in main\r\n return_code = base.doTransaction()\r\n File \"/usr/share/yum-cli/cli.py\", line 547, in doTransaction\r\n testcb = RPMTransaction(self, test=True)\r\n File \"/usr/lib/python2.6/site-packages/yum/rpmtrans.py\", line 198, in __init__\r\n self._setupOutputLogging(base.conf.rpmverbosity)\r\n File \"/usr/lib/python2.6/site-packages/yum/rpmtrans.py\", line 225, in _setupOutputLogging\r\n self.base.ts.ts.scriptFd = self._writepipe.fileno()\r\nAttributeError: 'NoneType' object has no attribute 'scriptFd'\r\nUploading Enabled Repositories Report\r\nLoaded plugins: product-id, rhnplugin, subscription-manager\r\nLoaded plugins: product-id, rhnplugin, subscription-manager\r\nLoaded plugins: product-id, rhnplugin, subscription-manager\r\nLoaded plugins: product-id, rhnplugin, subscription-manager\r\nLoaded plugins: product-id, rhnplugin, subscription-manager\r\n",
"--> Running transaction check\r\n---> Package glibc-headers.x86_64 0:2.12-1.212.el6 will be installed\r\n--> Processing Dependency: kernel-headers >= 2.2.1 for package: glibc-headers-2.12-1.212.el6.x86_64\r\n--> Processing Dependency: kernel-headers for package: glibc-headers-2.12-1.212.el6.x86_64\r\n---> Package irqbalance.x86_64 2:1.0.7-9.el6 will be an update\r\n--> Processing Dependency: kernel >= 2.6.32-358.2.1 for package: 2:irqbalance-1.0.7-9.el6.x86_64\r\n--> Finished Dependency Resolution\r\nError: Package: 2:irqbalance-1.0.7-9.el6.x86_64 (rhel-6-server-rpms)\r\n Requires: kernel >= 2.6.32-358.2.1\r\n Installed: kernel-2.6.32-131.0.15.el6.x86_64 (@anaconda-RedHatEnterpriseLinux-201105101844.x86_64/6.1)\r\n kernel = 2.6.32-131.0.15.el6\r\n kernel = 2.6.32-131.0.15.el6\r\nError: Package: glibc-headers-2.12-1.212.el6.x86_64 (rhel-6-server-rpms)\r\n Requires: kernel-headers >= 2.2.1\r\nError: Package: glibc-headers-2.12-1.212.el6.x86_64 (rhel-6-server-rpms)\r\n Requires: kernel-headers\r\n You could try using --skip-broken to work around the problem\r\n You could try running: rpm -Va --nofiles --nodigest\r\nUploading Enabled Repositories Report\r\n"
] | 1,547 | 1,554 | 1,549 | CONTRIBUTOR | null | In the `run_squad2` example notebook, the `write_predictions` method fails because `best_non_null_entry` is `None`
```
Evaluating: 100%|███████████████████████████| 1529/1529 [05:12<00:00, 4.88it/s]
01/18/2019 21:42:28 - INFO - __main__ - Writing predictions to: ./models/squad2/predictions.json
01/18/2019 21:42:28 - INFO - __main__ - Writing nbest to: ./models/squad2/nbest_predictions.json
Traceback (most recent call last):
File "run_squad2.py", line 1075, in <module>
main()
File "run_squad2.py", line 1071, in main
output_nbest_file, output_null_log_odds_file, args.verbose_logging, True, args.null_score_diff_threshold)
File "run_squad2.py", line 612, in write_predictions
score_diff = score_null - best_non_null_entry.start_logit - (
AttributeError: 'NoneType' object has no attribute 'start_logit'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/207/timeline | completed | null | null |
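The AttributeError in the traceback above comes from `write_predictions` assuming that a best non-null candidate always exists before it computes the null-score difference. Below is a small, hypothetical sketch of a guard around that logic; the field and argument names mirror the traceback, and the fallback to an empty (no-answer) prediction is an assumption, not the repository's actual fix.

```python
# Hypothetical guard around the line that crashes in the traceback above.
# Falling back to "" (no answer) when no non-null candidate exists is an
# assumption; the real script may handle this case differently.
from collections import namedtuple

NbestEntry = namedtuple("NbestEntry", ["text", "start_logit", "end_logit"])  # assumed shape

def resolve_prediction(score_null, best_non_null_entry, null_score_diff_threshold=0.0):
    """Pick the final answer text, tolerating a missing best_non_null_entry."""
    if best_non_null_entry is None:
        return ""  # avoids: AttributeError: 'NoneType' object has no attribute 'start_logit'
    score_diff = (score_null
                  - best_non_null_entry.start_logit
                  - best_non_null_entry.end_logit)
    return "" if score_diff > null_score_diff_threshold else best_non_null_entry.text

print(resolve_prediction(-3.2, NbestEntry("a puppeteer", 5.1, 4.7)))  # 'a puppeteer'
print(resolve_prediction(-3.2, None))                                 # '' instead of crashing
```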
https://api.github.com/repos/huggingface/transformers/issues/206 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/206/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/206/comments | https://api.github.com/repos/huggingface/transformers/issues/206/events | https://github.com/huggingface/transformers/issues/206 | 400,775,467 | MDU6SXNzdWU0MDA3NzU0Njc= | 206 | Classifier example not training on CoLa data | {
"login": "ironflood",
"id": 11771531,
"node_id": "MDQ6VXNlcjExNzcxNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/11771531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ironflood",
"html_url": "https://github.com/ironflood",
"followers_url": "https://api.github.com/users/ironflood/followers",
"following_url": "https://api.github.com/users/ironflood/following{/other_user}",
"gists_url": "https://api.github.com/users/ironflood/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ironflood/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ironflood/subscriptions",
"organizations_url": "https://api.github.com/users/ironflood/orgs",
"repos_url": "https://api.github.com/users/ironflood/repos",
"events_url": "https://api.github.com/users/ironflood/events{/privacy}",
"received_events_url": "https://api.github.com/users/ironflood/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Try this for CoLA: `--bert_model bert-base-uncased --do_lower_case`. \r\nYou may also need to increase `num_train_epochs` or `learning_rate` a little. \r\n",
"Thanks for your suggestions, but when following them nothing changes, it always predict one class regardless.\r\nTried:\r\n- 3 and 10 epochs\r\n- different learning rates: 5e-3, 5e-4, 5e-5\r\n- all done with --bert_model bert-base-uncased --do_lower_case (but at the same time being able to use the multi language + Cased input would be an important plus)\r\n\r\nOne of the new command tested:\r\n`run_classifier.py --task_name CoLA --do_train --do_eval --data_dir ../data/CoLA-BALANCED/ --bert_model bert-base-uncased --do_lower_case --max_seq_length 128 --train_batch_size 32 --learning_rate 5e-4 --num_train_epochs 10.0 --output_dir output/cola/`\r\n\r\nThe console output:\r\nhttps://gist.github.com/ironflood/150618e6f9cb56572729bf282c9cd2aa\r\n\r\nThe rebalanced train & test dataset didn't change from first message and I checked its validity.",
"I think 5e-3 and 5e-4 are too high; they are even higher than pre-training. \r\n\r\nI'm getting eval_accuracy = 0.76 (i.e. not one class) with `--learning_rate 5e-5 --num_train_epochs 3.0` (other arguments and dataset are same as yours), so I have no idea why you didn't. \r\n",
"Thanks for testing it out. I tried again 5e-5 and I'm getting close to your result as well. As I tried earlier this LR could it be that sometimes it doesn't converge properly because of random seed? Another question: testing out 5e-5 and 5e-6 on the cased multi language model I'm getting approx 0.68 acc instead of 0.78 with lowercased standard model on 3 and 10 epoch training, is it only because of the case adding more difficulty?",
"No, random seed is always same. See [this line](https://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/examples/run_classifier.py#L421) in the code. \r\nFor multilingual models, you should refer [multilingual.md](https://github.com/google-research/bert/blob/master/multilingual.md#results).",
"Closing since there is no recent activity. Feel free to re-open if needed."
] | 1,547 | 1,549 | 1,549 | NONE | null | Hi,
I obtained strange classification eval results (always predicting the same label) when trying out `run_classifier.py` after cloning the repo (no modifications), so to dig a bit deeper I rebalanced the CoLA dataset (train.tsv and dev.tsv) to get a better understanding of what is happening. When running the classifier example, the network still doesn't learn and keeps predicting the same label. Any idea why? I thought the issue might be in saving/loading the state_dict after training, so I bypassed that and used the model from the training loop directly, but to no avail: the results are the same. Any pointers? Am I missing something big here?
The rebalanced & randomized CoLA dataset (the majority class was simply downsampled to the size of the minority one):
https://drive.google.com/file/d/1_QjnknEusQZgbhTqJcFhBLDrRZXQQRZ6/view?usp=sharing
My training command:
`run_classifier.py --task_name CoLA --do_train --do_eval --data_dir ../data/CoLA-BALANCED/ --bert_model bert-base-multilingual-cased --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir output/cola`
The output logits (`digits` in my debug print below) are always in favor of one class; for example:
```
eval batch#0 print(digits)
[[ 0.05559488 0.00027706]
[ 0.05565088 0.00031819]
[ 0.05568472 0.00032058]
[ 0.05567677 0.00027802]
[ 0.05567478 0.00028492]
[ 0.05566313 0.00030545]
[ 0.05558664 0.00028202]
[ 0.05569095 0.0002955 ]]
eval batch#1 print(digits)
[[ 0.05566129 0.00032648]
[ 0.05567207 0.00029943]
[ 0.05569698 0.00030764]
[ 0.05563145 0.00030007]
[ 0.05566984 0.00032966]
[ 0.05565657 0.00032679]
[ 0.05569271 0.00030621]
[ 0.05561762 0.00030394]]
```
Some training examples:
```
01/18/2019 16:28:12 - INFO - __main__ - *** Example ***
01/18/2019 16:28:12 - INFO - __main__ - guid: train-0
01/18/2019 16:28:12 - INFO - __main__ - tokens: [CLS] Ent ##hus ##ias ##tic golf ##ers with large hand ##icap ##s can be good company . [SEP]
01/18/2019 16:28:12 - INFO - __main__ - input_ids: 101 63412 15471 15465 13275 32288 10901 10169 12077 15230 73130 10107 10944 10347 15198 12100 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:28:12 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:28:12 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:28:12 - INFO - __main__ - label: 1 (id = 1)
01/18/2019 16:28:12 - INFO - __main__ - *** Example ***
01/18/2019 16:28:12 - INFO - __main__ - guid: train-1
01/18/2019 16:28:12 - INFO - __main__ - tokens: [CLS] The horse jump ##ed over the fe ##nce . [SEP]
01/18/2019 16:28:12 - INFO - __main__ - input_ids: 101 10117 30491 54941 10336 10491 10105 34778 12150 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:28:12 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:28:12 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:28:12 - INFO - __main__ - label: 1 (id = 1)
01/18/2019 16:28:12 - INFO - __main__ - *** Example ***
01/18/2019 16:28:12 - INFO - __main__ - guid: train-2
01/18/2019 16:28:12 - INFO - __main__ - tokens: [CLS] Brown equipped Jones a camera . [SEP]
01/18/2019 16:28:12 - INFO - __main__ - input_ids: 101 12623 41880 12298 169 26665 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:28:12 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:28:12 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:28:12 - INFO - __main__ - label: 0 (id = 0)
01/18/2019 16:28:12 - INFO - __main__ - *** Example ***
01/18/2019 16:28:12 - INFO - __main__ - guid: train-3
01/18/2019 16:28:12 - INFO - __main__ - tokens: [CLS] I destroyed there . [SEP]
01/18/2019 16:28:12 - INFO - __main__ - input_ids: 101 146 24089 11155 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:28:12 - INFO - __main__ - input_mask: 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:28:12 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:28:12 - INFO - __main__ - label: 0 (id = 0)
```
Some eval examples:
```
01/18/2019 16:32:04 - INFO - __main__ - *** Example ***
01/18/2019 16:32:04 - INFO - __main__ - guid: dev-0
01/18/2019 16:32:04 - INFO - __main__ - tokens: [CLS] Dana walk ##ed and Leslie ran . [SEP]
01/18/2019 16:32:04 - INFO - __main__ - input_ids: 101 27149 33734 10336 10111 25944 17044 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:32:04 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:32:04 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:32:04 - INFO - __main__ - label: 1 (id = 1)
01/18/2019 16:32:04 - INFO - __main__ - *** Example ***
01/18/2019 16:32:04 - INFO - __main__ - guid: dev-1
01/18/2019 16:32:04 - INFO - __main__ - tokens: [CLS] The younger woman might have been tall and , and the older one def ##inite ##ly was , bl ##ond . [SEP]
01/18/2019 16:32:04 - INFO - __main__ - input_ids: 101 10117 27461 18299 20970 10529 10590 36243 10111 117 10111 10105 18757 10464 100745 100240 10454 10134 117 21484 26029 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:32:04 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:32:04 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:32:04 - INFO - __main__ - label: 0 (id = 0)
01/18/2019 16:32:04 - INFO - __main__ - *** Example ***
01/18/2019 16:32:04 - INFO - __main__ - guid: dev-2
01/18/2019 16:32:04 - INFO - __main__ - tokens: [CLS] What the water did to the bot ##tle was fill it . [SEP]
01/18/2019 16:32:04 - INFO - __main__ - input_ids: 101 12489 10105 12286 12172 10114 10105 41960 16406 10134 20241 10271 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:32:04 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:32:04 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/18/2019 16:32:04 - INFO - __main__ - label: 0 (id = 0)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/206/timeline | completed | null | null |
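A quick way to confirm the "always predicts one class" symptom reported in the record above is to take the printed logits and check the argmax per row. The sketch below is generic PyTorch, not part of run_classifier.py; the values are copied from the batch printed in the report, and the thread above suggests 5e-5 as a workable learning rate for this setup.

```python
# Diagnostic sketch (generic PyTorch, not from run_classifier.py): turn logits
# into predicted labels and count how often each class is chosen.
import torch

logits = torch.tensor([[0.0556, 0.0003],     # values copied from the batch printed above
                       [0.0557, 0.0003],
                       [0.0557, 0.0003],
                       [0.0557, 0.0003]])

predictions = logits.argmax(dim=-1)                  # predicted class per example
counts = torch.bincount(predictions, minlength=2)    # how often each class is predicted
print(predictions.tolist())   # [0, 0, 0, 0] -> every example gets class 0
print(counts.tolist())        # [4, 0]       -> degenerate, single-class classifier
```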
https://api.github.com/repos/huggingface/transformers/issues/205 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/205/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/205/comments | https://api.github.com/repos/huggingface/transformers/issues/205/events | https://github.com/huggingface/transformers/issues/205 | 400,738,031 | MDU6SXNzdWU0MDA3MzgwMzE= | 205 | What is the meaning of Attention Mask | {
"login": "jianyucai",
"id": 28853070,
"node_id": "MDQ6VXNlcjI4ODUzMDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28853070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jianyucai",
"html_url": "https://github.com/jianyucai",
"followers_url": "https://api.github.com/users/jianyucai/followers",
"following_url": "https://api.github.com/users/jianyucai/following{/other_user}",
"gists_url": "https://api.github.com/users/jianyucai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jianyucai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jianyucai/subscriptions",
"organizations_url": "https://api.github.com/users/jianyucai/orgs",
"repos_url": "https://api.github.com/users/jianyucai/repos",
"events_url": "https://api.github.com/users/jianyucai/events{/privacy}",
"received_events_url": "https://api.github.com/users/jianyucai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, this conversion is done inside the model, see this line: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L626\r\n(we don't use infinity but a large value that works also when the model is used in half precision mode)",
"Thanks for your answer. Well, I still have a little problem understanding what you mean in the last sentence:\r\n\r\n> (we don't use infinity but a large value that works also when the model is used in half precision mode)\r\n\r\nI make a simple experiment:\r\n\r\n```python\r\n>>> import torch\r\n>>> import numpy as np\r\n>>> inf = np.array(-np.inf)\r\n>>> inf\r\narray(-inf)\r\n>>> inf = torch.from_numpy(inf)\r\n>>> inf\r\ntensor(-inf, dtype=torch.float64)\r\n>>> inf.half()\r\ntensor(-inf, dtype=torch.float16)\r\n```\r\n\r\nIt seems `-inf` works well.\r\nIn another experiment, I tried the following: \r\n\r\n```python\r\n>>> scores = torch.FloatTensor([1, 2, 3, 4, 4])\r\n>>> mask = torch.ByteTensor([1, 1, 1, 0, 0])\r\n>>> scores.masked_fill_(mask == 0, -np.inf)\r\ntensor([1., 2., 3., -inf, -inf])\r\n>>> scores.half()\r\ntensor([1., 2., 3., -inf, -inf], dtype=torch.float16)\r\n```\r\nIt seems that both 2 experiments works well, so I don't get what is the problem to use `-inf` in half precision mode\r\n\r\nThank you",
"attention mask is -10000.0 for positins that we do not want attention for\r\n\r\nit is set to -10000.0 in \r\n\r\n # Since attention_mask is 1.0 for positions we want to attend and 0.0 for\r\n # masked positions, this operation will create a tensor which is 0.0 for\r\n # positions we want to attend and -10000.0 for masked positions.\r\n # Since we are adding it to the raw scores before the softmax, this is\r\n # effectively the same as removing these entirely.\r\n \r\nin [get_extended_attention_mask](https://github.com/huggingface/transformers/blob/e95d433d77727a9babadf008dd621a2326d37303/src/transformers/modeling_utils.py#L700)"
] | 1,547 | 1,660 | 1,547 | NONE | null | Hi, I noticed that there is something called `Attention Mask` in the model.
In the annotation of class `BertForQuestionAnswering`,
```python
`attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices
selected in [0, 1]. It's a mask to be used if the input sequence length is smaller than the max
input sequence length in the current batch. It's the mask that we typically use for attention when
a batch has varying length sentences.
```
And its usage is in class `BertSelfAttention`, function `forward`,
```python
# Apply the attention mask is (precomputed for all layers in BertModel forward() function)
attention_scores = attention_scores + attention_mask
```
It seems the attention_mask is used to add 1 to the scores for positions that are taken up by real tokens, and 0 to the positions outside the current sequence.
Then why not set the scores to `-inf` where the positions are outside the current sequence? After passing the scores through a softmax layer, those scores would become 0, as we want. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/205/reactions",
"total_count": 12,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/205/timeline | completed | null | null |
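The masking discussed in the record above can be illustrated in a few lines of PyTorch: the 1/0 padding mask is turned into a 0/-10000 additive mask, and after the softmax the padded positions receive essentially zero attention weight (a large negative constant is used instead of -inf so the trick also behaves well in half precision). The tensors below are toy values, not taken from the model.

```python
# Toy illustration of the additive attention mask described above: 1/0 padding
# mask -> 0 / -10000 additive mask -> (near-)zero softmax weight on padding.
import torch

attention_mask = torch.tensor([1.0, 1.0, 1.0, 0.0, 0.0])   # 1 = real token, 0 = padding
extended_mask = (1.0 - attention_mask) * -10000.0           # 0 for real tokens, -10000 for padding

raw_scores = torch.tensor([1.0, 2.0, 3.0, 4.0, 4.0])        # pre-softmax attention scores
probs = torch.softmax(raw_scores + extended_mask, dim=-1)

print(extended_mask.tolist())   # [0.0, 0.0, 0.0, -10000.0, -10000.0]
print(probs.tolist())           # padded positions end up with ~0.0 probability
```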
https://api.github.com/repos/huggingface/transformers/issues/204 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/204/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/204/comments | https://api.github.com/repos/huggingface/transformers/issues/204/events | https://github.com/huggingface/transformers/issues/204 | 400,582,170 | MDU6SXNzdWU0MDA1ODIxNzA= | 204 | Two to Three mask word prediction at the same sentence is very complex | {
"login": "MuruganR96",
"id": 35978784,
"node_id": "MDQ6VXNlcjM1OTc4Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/35978784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MuruganR96",
"html_url": "https://github.com/MuruganR96",
"followers_url": "https://api.github.com/users/MuruganR96/followers",
"following_url": "https://api.github.com/users/MuruganR96/following{/other_user}",
"gists_url": "https://api.github.com/users/MuruganR96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MuruganR96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MuruganR96/subscriptions",
"organizations_url": "https://api.github.com/users/MuruganR96/orgs",
"repos_url": "https://api.github.com/users/MuruganR96/repos",
"events_url": "https://api.github.com/users/MuruganR96/events{/privacy}",
"received_events_url": "https://api.github.com/users/MuruganR96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @MuruganR96, from my experiments two to three mask word prediction doesn't seems to be possible with BERT.",
"thanks @thomwolf sir"
] | 1,547 | 1,548 | 1,548 | NONE | null | Predicting two to three masked words in the same sentence is also very complex.
How can I get good accuracy?
If I pretrain the BERT model on my own dataset with **masked_lm_prob=0.25** (https://github.com/google-research/bert#pre-training-with-bert), what will happen?
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/204/timeline | completed | null | null |
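For reference on the question in the record above, here is a sketch of predicting two masked positions with `BertForMaskedLM`, using the same `pytorch_pretrained_bert` API that appears in the README-style example quoted in issue 198 further down. Each `[MASK]` is filled independently with its own argmax, which is part of why multi-mask prediction is harder than single-mask prediction; the sentence and positions here are made up.

```python
# Sketch: fill two [MASK] tokens independently with BertForMaskedLM.
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

tokens = ['[CLS]', 'jim', 'henson', 'was', 'a', '[MASK]', '[MASK]', '.', '[SEP]']
masked_positions = [5, 6]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    predictions = model(input_ids)            # shape: [1, seq_len, vocab_size]

for pos in masked_positions:
    best_id = torch.argmax(predictions[0, pos]).item()
    print(pos, tokenizer.convert_ids_to_tokens([best_id])[0])
```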
https://api.github.com/repos/huggingface/transformers/issues/203 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/203/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/203/comments | https://api.github.com/repos/huggingface/transformers/issues/203/events | https://github.com/huggingface/transformers/issues/203 | 400,544,254 | MDU6SXNzdWU0MDA1NDQyNTQ= | 203 | Add some new layers from BertModel and then 'grad' error occurs | {
"login": "lhbrichard",
"id": 33123730,
"node_id": "MDQ6VXNlcjMzMTIzNzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/33123730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhbrichard",
"html_url": "https://github.com/lhbrichard",
"followers_url": "https://api.github.com/users/lhbrichard/followers",
"following_url": "https://api.github.com/users/lhbrichard/following{/other_user}",
"gists_url": "https://api.github.com/users/lhbrichard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhbrichard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhbrichard/subscriptions",
"organizations_url": "https://api.github.com/users/lhbrichard/orgs",
"repos_url": "https://api.github.com/users/lhbrichard/repos",
"events_url": "https://api.github.com/users/lhbrichard/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhbrichard/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you can share a (minimal) example reproducing the error, I can have a look.",
"I'm closing this. Feel free to re-open and share more information if you still have some issues."
] | 1,547 | 1,548 | 1,548 | NONE | null | I want to do fine-tuning by adding a TextCNN on top of BertModel. I wrote a new class and added two conv layers (like a TextCNN), basically on the embedding layer. Then an error occurs: "grad can be implicitly created only for scalar outputs". I searched the Internet and couldn't find a good solution; I hope someone can solve it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/203/timeline | completed | null | null |
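The error quoted in the record above is raised by PyTorch itself whenever `.backward()` is called on a tensor with more than one element, which commonly happens when a per-example loss is never reduced. Below is a minimal, generic reproduction and the usual fix; this is plain PyTorch, not the reporter's BERT+TextCNN model.

```python
# Generic reproduction of "grad can be implicitly created only for scalar
# outputs" and the usual fix: reduce the loss to a scalar before backward().
import torch
import torch.nn as nn

logits = torch.randn(8, 2, requires_grad=True)            # e.g. classifier-head outputs
labels = torch.randint(0, 2, (8,))

per_example_loss = nn.CrossEntropyLoss(reduction='none')(logits, labels)   # shape [8]
# per_example_loss.backward()   # -> RuntimeError: grad can be implicitly created only for scalar outputs

loss = per_example_loss.mean()  # reduce to a scalar (sum() also works)
loss.backward()
print(logits.grad.shape)        # torch.Size([8, 2])
```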
https://api.github.com/repos/huggingface/transformers/issues/202 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/202/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/202/comments | https://api.github.com/repos/huggingface/transformers/issues/202/events | https://github.com/huggingface/transformers/issues/202 | 400,521,941 | MDU6SXNzdWU0MDA1MjE5NDE= | 202 | training new BERT seems not working | {
"login": "haoyudong-97",
"id": 17803684,
"node_id": "MDQ6VXNlcjE3ODAzNjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/17803684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haoyudong-97",
"html_url": "https://github.com/haoyudong-97",
"followers_url": "https://api.github.com/users/haoyudong-97/followers",
"following_url": "https://api.github.com/users/haoyudong-97/following{/other_user}",
"gists_url": "https://api.github.com/users/haoyudong-97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haoyudong-97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haoyudong-97/subscriptions",
"organizations_url": "https://api.github.com/users/haoyudong-97/orgs",
"repos_url": "https://api.github.com/users/haoyudong-97/repos",
"events_url": "https://api.github.com/users/haoyudong-97/events{/privacy}",
"received_events_url": "https://api.github.com/users/haoyudong-97/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @UCJerryDong,\r\n\r\nTraining BERT from scratch takes a (very) long time (see the paper for TPU training, an estimation is training time using GPUs is about a week using 64 GPUs), this script is more for fine-tuning (using the pre-training objective) than to train from scratch.\r\n\r\nDid you monitor the losses during training and wait for convergence?",
"Hi, I am trying to do something similar:) My guess is that `sample.txt` is too small. \r\n\r\n@thomwolf Just to confirm, the above code should produce a new BERT model from scratch that's based on the existing vocab file right? Thanks!",
"It seems to be problematic to generate new samples every epoch, at least for such a small corpus. \r\nThe model convergenced for me with `--num_train_epochs 50.0`, if I reuse the same `train_dataset` by adding `train_dataset = [train_dataset[i] for i in range(len(train_dataset))]` in the code.",
"Hi @thomwolf,\r\n\r\nI trained the model for an hour but the loss is always around 0.6-0.8 and never converges. I know it's computationally expensive to train the BERT; that's why I choose the very small dataset (sample.txt, which only has 36 lines).\r\n\r\nThe main issue is that I have tried the same dataset with the [original tensorflow version BERT](https://github.com/google-research/bert.git) and it converges within 5 minutes:\r\n\r\n> next_sentence_accuracy = 1.0\r\nnext_sentence_loss = 0.00012585879 \r\n\r\nThat's why I'm wondering if something is wrong with the model. I have also checked the output of each forward step, and found out that the encoder_layers have similar row values, i.e. rows in the matrix \"encoder_layers\" are similar to each other. \r\n` encoded_layers = self.encoder(embedding_output,\r\n extended_attention_mask,\r\n output_all_encoded_layers=output_all_encoded_layers)`",
"Ok, that's strange indeed. Can you share your code? I can have a look.\r\n\r\nI haven't tried the pre-training script myself yet.",
"Thanks for helping! I have created a [github repo](https://github.com/UCJerryDong/pytorch_bert.git) with my modified code. Also, I have tried what @nhatchan suggests (thanks!) and it does work.\r\n\r\nBut I feel that shouldn't be the correct way for final solution as it stores every data on memory and it will require too much if training with real dataset.",
"Thank, I'll have a look. Can you also show me what you did with the Tensorflow model so I can compare the behaviors in the two cases?",
"I just follow the instructions under section [Pre-training with BERT](https://github.com/google-research/bert)",
"> But I feel that shouldn't be the correct way for final solution as it stores every data on memory and it will require too much if training with real dataset.\r\n\r\n@UCJerryDong Yes, I just showed one of the differences from Tensorflow version, and that's why I didn't send a PR addressing this. I'm even not sure whether this affects the model performance when you train with real dataset or not. \r\n\r\nIncidentally, I'm also trying to do something similar, with real data, but still losses seems higher than that of Tensorflow version. I suspect some of minor differences (like this, issues 195 and 38), but not yet figured it out. \r\n",
"Hi guys,\r\n\r\n> see the paper for TPU training, an estimation is training time using GPUs is about a week using 64 GPUs\r\n\r\nBtw, there is an article on this topic http://timdettmers.com/2018/10/17/tpus-vs-gpus-for-transformers-bert/\r\n\r\nI was wondering, maybe someone tried tweaking some parameters in the transformer, so that it could converge much faster (ofc, maybe at the expense of accuracy), i.e.:\r\n- Initializing the embedding layer with FastText / your embeddings of choice - in our tests it boosted accuracy and convergence with more plain models;\r\n- Using a more standard 200 or 300 dimension embedding instead of 768 (also tweaking the hidden size accordingly);\r\n\r\nPersonally for me the allure of transformer is not really about the state-of-the-art accuracy, but about having the same architecture applicable for any sort of NLP task (i.e. QA tasks or SQUAD like objectives may require a custom engineering or some non-transferrable models).\r\n",
"HIοΌI have a problem that which line code leet the pretrained model freezed(fine-turn) but no trainable ",
"Hi @snakers4 and @BITLsy, please open new issues for your problems and discussion.",
"Hi @thomwolf Do you have any update on this? Is the issue resolved?",
"Hi @ntomita yes, this is just a differing behavior between the TensorFlow and PyTorch training code.\r\n- the original TensorFlow code does `static masking` in which the masking of the training dataset is computed once for all so you can quickly overfit on a small training set with a few epochs\r\n- in our code we use `dynamic masking` where the masking is generated on the fly so overfitting a single batch takes more epochs.\r\n\r\nThe recent RoBERTa paper (http://arxiv.org/abs/1907.11692) compares the two approaches (see section 4.1) and conclude that `dynamic masking` is comparable or slightly better than `static masking` (as expected I would say).",
"Hi @thomwolf that's awesome! I was working on pretraining a modified BERT model using this library with our own data for a quite while, struggled convergence, and wondering if I should try other libraries like original tf implementation or fairseq as other people reported slower convergence with this library. I use dynamic masking so what you're saying is reasonable. I also saw recently that MS azure group has successfully pretrained their models which are implemented with this library. Since you keep telling people that this library is not meant for pretraining I thought there are some critical bugs in models or optimization processes. I needed some confidence to keep working with this library so thanks for your follow-up!",
"No \"critical bugs\" indeed lol :-)\r\nYou can use this library as the basis for training from scratch (like Microsoft and NVIDIA did).\r\nWe just don't provide training scripts (at the current stage, maybe we'll add some later but I would like to keep them simple if we do).",
"Bert is way to sensitive to the learning rate and data as well. \r\nSomehow it makes thing back to 20 years ago when deep learning is still a unusable approch.\r\nIt's not the fault of libraries writers. The model itself has that problem. "
] | 1,547 | 1,642 | 1,551 | NONE | null | I tried to train a BERT model from scratch with "run_lm_finetuning.py" on toy training data (samples/sample.txt) by changing the following:
`#model = BertForPreTraining.from_pretrained(args.bert_model)`
`bert_config = BertConfig.from_json_file('bert_config.json')`
`model = BertForPreTraining(bert_config) `
where the json file comes from [BERT-Base, Multilingual Cased](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip).
To check the correctness of training, I printed the next-sentence relationship scores (for the next sentence prediction task) in "pytorch_pretrained_bert/modeling.py":
`prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output)`
`print(seq_relationship_score)`
And the result was (just picking an example from a single batch).
Tensor([[-0.1078, -0.2696],
[-0.1425, -0.3207],
[-0.0179, -0.2271],
[-0.0260, -0.2963],
[-0.1410, -0.2506],
[-0.0566, -0.3013],
[-0.0874, -0.3330],
[-0.1568, -0.2580],
[-0.0144, -0.3072],
[-0.1527, -0.3178],
[-0.1288, -0.2998],
[-0.0439, -0.3267],
[-0.0641, -0.2566],
[-0.1496, -0.3696],
[ 0.0286, -0.2495],
[-0.0922, -0.3002]], device='cuda:0', grad_fn=AddmmBackward)
Notice that since the scores in the first column were higher than those in the second column, the model predicted the same label (all "not next sentence" or all "next sentence") for the whole batch, and this was true for every batch. I feel this shouldn't be the case. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/202/timeline | completed | null | null |
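For reference, the construction described in the record above (building `BertForPreTraining` from a config instead of from pretrained weights) and a quick check of the next-sentence head can be sketched as follows. The config filename and the toy batch are placeholders, and no pretrained weights are loaded here.

```python
# Sketch of the from-config construction described above, plus a quick check of
# the next-sentence head's predictions. 'bert_config.json' and the toy batch
# are placeholders; the model is randomly initialised.
import torch
from pytorch_pretrained_bert.modeling import BertConfig, BertForPreTraining

config = BertConfig.from_json_file('bert_config.json')
model = BertForPreTraining(config)
model.eval()

input_ids = torch.randint(0, config.vocab_size, (4, 16))  # 4 toy sequences of 16 token ids

with torch.no_grad():
    prediction_scores, seq_relationship_score = model(input_ids)

print(seq_relationship_score)                  # [4, 2] logits for is-next / not-next
print(seq_relationship_score.argmax(dim=-1))   # the same label for every row is the symptom above
```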
https://api.github.com/repos/huggingface/transformers/issues/201 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/201/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/201/comments | https://api.github.com/repos/huggingface/transformers/issues/201/events | https://github.com/huggingface/transformers/pull/201 | 400,365,120 | MDExOlB1bGxSZXF1ZXN0MjQ1NTY5NTY0 | 201 | run_squad2 Don't save model if do not train | {
"login": "Liangtaiwan",
"id": 20909894,
"node_id": "MDQ6VXNlcjIwOTA5ODk0",
"avatar_url": "https://avatars.githubusercontent.com/u/20909894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Liangtaiwan",
"html_url": "https://github.com/Liangtaiwan",
"followers_url": "https://api.github.com/users/Liangtaiwan/followers",
"following_url": "https://api.github.com/users/Liangtaiwan/following{/other_user}",
"gists_url": "https://api.github.com/users/Liangtaiwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Liangtaiwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Liangtaiwan/subscriptions",
"organizations_url": "https://api.github.com/users/Liangtaiwan/orgs",
"repos_url": "https://api.github.com/users/Liangtaiwan/repos",
"events_url": "https://api.github.com/users/Liangtaiwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Liangtaiwan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,547 | 1,547 | 1,547 | CONTRIBUTOR | null | There is a bug in example/run_squad2.py:
If the model does not train, the randomly initialized values will overwrite the pretrained model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/201/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/201",
"html_url": "https://github.com/huggingface/transformers/pull/201",
"diff_url": "https://github.com/huggingface/transformers/pull/201.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/201.patch",
"merged_at": 1547800091000
} |
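In sketch form, the guard that the PR above describes looks roughly like the following; the variable names (`args.do_train`, `args.output_dir`, the DataParallel unwrapping) follow the example scripts and are assumptions here rather than a copy of the merged diff.

```python
# Rough sketch of the guard described above: only write the weights to disk if
# training actually ran, so an untrained (randomly initialised) model never
# overwrites the pretrained/fine-tuned checkpoint. Names are assumptions.
import os
import torch

def maybe_save(model, args, filename="pytorch_model.bin"):
    if not args.do_train:
        return  # evaluation-only run: keep the existing checkpoint untouched
    model_to_save = model.module if hasattr(model, "module") else model  # unwrap DataParallel
    torch.save(model_to_save.state_dict(), os.path.join(args.output_dir, filename))
```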
https://api.github.com/repos/huggingface/transformers/issues/200 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/200/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/200/comments | https://api.github.com/repos/huggingface/transformers/issues/200/events | https://github.com/huggingface/transformers/pull/200 | 400,164,405 | MDExOlB1bGxSZXF1ZXN0MjQ1NDE1MzQx | 200 | Adding Transformer-XL pre-trained model | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Awesome work!\r\n\r\nI'm not sure I understand how this model integrates with BERT. Did Google release weights for the MLM + next sentence prediction task using the Transformer XL? And if they did not, how well do the classical LM weights performs for finetuning tasks?",
"Oh it's not integrated with BERT, that just another model. I'm adding it here to make it use the same easy-to-use interface (with pretrained/cached model and tokenizer) like I do for the OpenAI GPT in the other PR (#183)\r\nIt's a language model like OpenAI GPT (trained with just a classical LM loss).\r\nCheck the Transformer-XL repo/paper for more details!",
"Ok, managed to find the bug in this conversion.\r\nThe custom-made AdaptiveSofmax used in Transformer-XL indexes the next-cluster probability tokens in reverse-order (see indexing by `-i` on [line 141 of the original repo](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/utils/proj_adaptive_softmax.py#L141)) so we got worse performances on less frequent words.\r\nTough to find!\r\n\r\nFixing this conversion we got a test set perplexity of `18.213` on Wiki-103 to be compared to a reported result of `18.3` with the TensorFlow model.",
"#254 is also the main PR for the inclusion of Transformer-XL. Closing this PR.",
"Hi, I am assuming the pre-trained model available with huggingface is the `large` variant and not `base`?"
] | 1,547 | 1,588 | 1,549 | MEMBER | null | Add Transformer-XL (https://github.com/kimiyoung/transformer-xl) with the pre-trained WT103 model (maybe also the 1B-Word model).
The original Google/CMU PyTorch version (https://github.com/kimiyoung/transformer-xl/tree/master/pytorch) has been slightly modified to better match the TF version which has the SOTA results. I've mostly untied the relative positioning/word biases and changed the initialization of memory states (TODO: PR these modifications back up to the original repo).
The Google/CMU model can be converted with:
```bash
pytorch_pretrained_bert convert_transfo_xl_checkpoint [PATH_TO_TRANSFO_XL_FOLDER]/model.ckpt-0 [PATH_TO_SAVE_PT_DUMP]
```
And the corpus and vocabulary with:
```bash
pytorch_pretrained_bert convert_transfo_xl_checkpoint [PATH_TO_TRANSFO_XL_DATA_FOLDER]/cache.pkl [PATH_TO_SAVE_DATA_AND_VOCABULARY]
```
The evaluation can be run with
```bash
cd ./examples
python eval_transfo_xl.py --cuda --model_name [PATH_TO_PT_DUMP] --work_dir [PATH_TO_SAVE_LOG]
```
Currently I have slightly higher values for the perplexity with the PyTorch model (using the TF pre-trained weights) `20.4` versus `18.3` for the TF version, might try to investigate this a little bit further. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/200/reactions",
"total_count": 7,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/200/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/200",
"html_url": "https://github.com/huggingface/transformers/pull/200",
"diff_url": "https://github.com/huggingface/transformers/pull/200.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/200.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/199 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/199/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/199/comments | https://api.github.com/repos/huggingface/transformers/issues/199/events | https://github.com/huggingface/transformers/pull/199 | 399,974,616 | MDExOlB1bGxSZXF1ZXN0MjQ1Mjc1NTA0 | 199 | (very) minor update to README | {
"login": "davidefiocco",
"id": 4547987,
"node_id": "MDQ6VXNlcjQ1NDc5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidefiocco",
"html_url": "https://github.com/davidefiocco",
"followers_url": "https://api.github.com/users/davidefiocco/followers",
"following_url": "https://api.github.com/users/davidefiocco/following{/other_user}",
"gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions",
"organizations_url": "https://api.github.com/users/davidefiocco/orgs",
"repos_url": "https://api.github.com/users/davidefiocco/repos",
"events_url": "https://api.github.com/users/davidefiocco/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidefiocco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,547 | 1,547 | 1,547 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/199/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/199",
"html_url": "https://github.com/huggingface/transformers/pull/199",
"diff_url": "https://github.com/huggingface/transformers/pull/199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/199.patch",
"merged_at": 1547679113000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/198 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/198/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/198/comments | https://api.github.com/repos/huggingface/transformers/issues/198/events | https://github.com/huggingface/transformers/issues/198 | 399,671,672 | MDU6SXNzdWUzOTk2NzE2NzI= | 198 | HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /models.huggingface.co/bert/bert-base-uncased.tar.gz (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x000002456AF21710>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',)) | {
"login": "laibamehnaz",
"id": 36405283,
"node_id": "MDQ6VXNlcjM2NDA1Mjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laibamehnaz",
"html_url": "https://github.com/laibamehnaz",
"followers_url": "https://api.github.com/users/laibamehnaz/followers",
"following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}",
"gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions",
"organizations_url": "https://api.github.com/users/laibamehnaz/orgs",
"repos_url": "https://api.github.com/users/laibamehnaz/repos",
"events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/laibamehnaz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, you need a (stable) internet connection to download the weights. This operation is only done once as the weights are then cached on you drive.",
"Thankyou so much!",
"@laibamehnaz have you solved the problem? I have a similar problem.",
"Hi,\r\nI am facing a similar issue, can anyone help with this? ",
"解ε³ζΉεΌ methodοΌ\r\n\r\nimport os\r\nos.environ['NO_PROXY'] = 'huggingface.co' # δΈθ΅°δ»£η\r\n\r\nor\r\n\r\nimport os\r\nimport requests\r\nos.environ['NO_PROXY'] = 'huggingface.co' # δΈθ΅°δ»£η\r\n\r\nor\r\n\r\nimport os\r\nos.environ['NO_PROXY'] = 'XXXXX.com' # δΈθ΅°δ»£ηοΌδ»»δ½η½ει½ε―δ»₯\r\n",
"\r\n\r\nThank you very much!\r\n\r\n> 解ε³ζΉεΌ methodοΌ\r\n> \r\n> import os os.environ['NO_PROXY'] = 'huggingface.co' # δΈθ΅°δ»£η\r\n> \r\n> or\r\n> \r\n> import os import requests os.environ['NO_PROXY'] = 'huggingface.co' # δΈθ΅°δ»£η\r\n> \r\n> or\r\n> \r\n> import os os.environ['NO_PROXY'] = 'XXXXX.com' # δΈθ΅°δ»£ηοΌδ»»δ½η½ει½ε―δ»₯\r\n\r\n"
] | 1,547 | 1,649 | 1,547 | NONE | null | I have been trying to execute this code:
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Tokenized input
text = "Who was Jim Henson ? Jim Henson was a puppeteer"
tokenized_text = tokenizer.tokenize(text)
# Mask a token that we will try to predict back with `BertForMaskedLM`
masked_index = 6
tokenized_text[masked_index] = '[MASK]'
assert tokenized_text == ['who', 'was', 'jim', 'henson', '?', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer']
# Convert token to vocabulary indices
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
# Define sentence A and B indices associated to 1st and 2nd sentences (see paper)
segments_ids = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
# Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
# Load pre-trained model (weights)
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()
This is the error that I am getting continuously:
HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /models.huggingface.co/bert/bert-base-uncased.tar.gz (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x000002456AF21710>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',))
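If the download keeps failing behind a proxy or firewall, one possible workaround (just a sketch — it assumes the vocab file and the extracted archive, i.e. `bert_config.json` and `pytorch_model.bin`, have been downloaded by hand into a local folder; the paths below are placeholders) is to point `from_pretrained` at those local files instead of the model name:
```python
# Hypothetical local paths -- adjust to wherever the files were downloaded/extracted.
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('./bert-base-uncased-vocab.txt')  # local vocab file
model = BertModel.from_pretrained('./bert-base-uncased/')  # folder with bert_config.json + pytorch_model.bin
model.eval()
```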
Can you help me with this, please? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/198/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/197 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/197/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/197/comments | https://api.github.com/repos/huggingface/transformers/issues/197/events | https://github.com/huggingface/transformers/issues/197 | 399,627,937 | MDU6SXNzdWUzOTk2Mjc5Mzc= | 197 | seems meet the GPU memory leak problem | {
"login": "zhangjcqq",
"id": 16695812,
"node_id": "MDQ6VXNlcjE2Njk1ODEy",
"avatar_url": "https://avatars.githubusercontent.com/u/16695812?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangjcqq",
"html_url": "https://github.com/zhangjcqq",
"followers_url": "https://api.github.com/users/zhangjcqq/followers",
"following_url": "https://api.github.com/users/zhangjcqq/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangjcqq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangjcqq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangjcqq/subscriptions",
"organizations_url": "https://api.github.com/users/zhangjcqq/orgs",
"repos_url": "https://api.github.com/users/zhangjcqq/repos",
"events_url": "https://api.github.com/users/zhangjcqq/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangjcqq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe use the `torch.no_grad()` context-manager which is the recommended way to perform inference with PyTorch now?\r\nSee https://pytorch.org/docs/stable/autograd.html#torch.autograd.no_grad",
"Closing this. Feel free to re-open if the issue is still there.",
"Hey there, I also have some memory leak problem when using the BertModel to produce embeddings to be used as features later on.\r\nI basically use the implementation as in the [usage example](https://huggingface.co/transformers/quickstart.html#quick-tour-usage).\r\n\r\n\r\n\r\n\r\n```python\r\nself.tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')\r\nself.model = trafo.BertModel.from_pretrained('bert-base-multilingual-cased')\r\nself.model.eval()\r\n\r\n...\r\n\r\ndef encode_text(self, text: str) -> np.ndarray:\r\n\tto_tokenize = f\"[CLS] {text} [SEP]\"\r\n tokenized_text = self.tokenizer.tokenize(to_tokenize)\r\n tokenized_text = tokenized_text[0:500]\r\n # Convert token to vocabulary indices\r\n indexed_tokens = self.tokenizer.convert_tokens_to_ids(tokenized_text)\r\n with torch.no_grad():\r\n tokens_tensor = torch.tensor([indexed_tokens]).data\r\n outputs = self.model(tokens_tensor)\r\n return outputs\r\n```\r\n\r\nI realized that if I comment out the line `outputs = self.model(tokens_tensor)` and just return some random numpy array as output, I have not increasing memory problem. So it seems to be calling the model with the tensor that increases the memory.\r\nFurther, if I use the 'bert-base-uncased' model, the memory stays the same as well. It only happens with the multi models.\r\n\r\nI used this method in a flask server application and made REST requests to it.\r\n\r\n",
"It's useful your assertion that it occurs _only_ when using _multi-lingual_ BERT model. Can you try to use `bert-base-multilingual-uncased` in order to do a comparison between these two? Perhaps there is a _performance bug_ in the multi-lingual setting.\r\n \r\n> Hey there, I also have some memory leak problem when using the BertModel to produce embeddings to be used as features later on.\r\n> I basically use the implementation as in the [usage example](https://huggingface.co/transformers/quickstart.html#quick-tour-usage).\r\n> \r\n> ```python\r\n> self.tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')\r\n> self.model = trafo.BertModel.from_pretrained('bert-base-multilingual-cased')\r\n> self.model.eval()\r\n> \r\n> ...\r\n> \r\n> def encode_text(self, text: str) -> np.ndarray:\r\n> \tto_tokenize = f\"[CLS] {text} [SEP]\"\r\n> tokenized_text = self.tokenizer.tokenize(to_tokenize)\r\n> tokenized_text = tokenized_text[0:500]\r\n> # Convert token to vocabulary indices\r\n> indexed_tokens = self.tokenizer.convert_tokens_to_ids(tokenized_text)\r\n> with torch.no_grad():\r\n> tokens_tensor = torch.tensor([indexed_tokens]).data\r\n> outputs = self.model(tokens_tensor)\r\n> return outputs\r\n> ```\r\n> \r\n> I realized that if I comment out the line `outputs = self.model(tokens_tensor)` and just return some random numpy array as output, I have not increasing memory problem. So it seems to be calling the model with the tensor that increases the memory.\r\n> Further, if I use the 'bert-base-uncased' model, the memory stays the same as well. It only happens with the multi models.\r\n> \r\n> I used this method in a flask server application and made REST requests to it.",
"So I tried it with `bert-base-multilingual-uncased` as well and it is the same behavior.\r\nI do not understand, why memory constantly grows on inference. To my understanding, I only push data through the network and then use the result layer's output. Before using the transformers, I had been using custom word embeddings trained in own keras models and I did not have this behavior. What am I missing here?",
"I've just seen that you're using **PyTorch 0.4.0**! What an oldest version you're using :D can you try to install the latest version of **PyTorch (1.3.1)** through `pip install --upgrade torch` and give us feedback? And please, if you can, update also the version of Transformers to the last (2.2.2) through `pip install --upgrade transformers`.\r\n\r\n> So I tried it with `bert-base-multilingual-uncased` as well and it is the same behavior.\r\n> I do not understand, why memory constantly grows on inference. To my understanding, I only push data through the network and then use the result layer's output. Before using the transformers, I had been using custom word embeddings trained in own keras models and I did not have this behavior. What am I missing here?",
"Hey there, I'm using the newest pytorch and transformers. You are probably mistaking this because of the first comment of this thread (by zhangjcqq) but that was not mine. I just hijacked this thread because it seemed to be the same problem I now have and there was no solution here. ",
"> Hey there, I'm using the newest pytorch and transformers. You are probably mistaking this because of the first comment of this thread (by zhangjcqq) but that was not mine. I just hijacked this thread because it seemed to be the same problem I now have and there was no solution here.\r\n\r\nSo you have tried out to upgrade PyTorch to 1.3.1 as suggested in my last comment, but there is the same error? If no, specify your environment and a piece of code in order to reproduce the bug.",
"I have the newest version of pytorch and transformers, yes. \r\n\r\nI have been monitoring the memory usage over 24h when I made ~ 300.000 requests. It seems that the memory increases constantly for quite some time but also seems to stabilize at a certain maximum. So the application started using ~2.5GB RAM and now stays at ~4.3GB.\r\n\r\nMaybe it has something to do with varying lengths of the texts I process? So that the longest texts are processed at a later point in time which then require the most RAM. Then, any subsequent text cannot need more so it stabilizes. Though this is just a thought. \r\n\r\nThanks already for your help, I'm off to Christmas vacations for now and will have a look at the issue in January again. I'll see if memory usage increases by then.\r\n",
"> flask\r\n\r\nI miss in the same problems\r\nbut without flask, it works",
"> I have the newest version of pytorch and transformers, yes.\r\n> \r\n> I have been monitoring the memory usage over 24h when I made ~ 300.000 requests. It seems that the memory increases constantly for quite some time but also seems to stabilize at a certain maximum. So the application started using ~2.5GB RAM and now stays at ~4.3GB.\r\n> \r\n> Maybe it has something to do with varying lengths of the texts I process? So that the longest texts are processed at a later point in time which then require the most RAM. Then, any subsequent text cannot need more so it stabilizes. Though this is just a thought.\r\n> \r\n> Thanks already for your help, I'm off to Christmas vacations for now and will have a look at the issue in January again. I'll see if memory usage increases by then.\r\n\r\nI have similar problems too. The memory usage gradually grows from 1xxxM to 3xxxM. @RomanTeucher @zhangjcqq did you manage to solve the issue?\r\n",
"@amjltc295 Did you find any solution to above issue?",
"> @amjltc295 Did you find any solution to above issue?\r\nwhen i run flask by:\r\n\r\nthreaded=False\r\n\r\nit works",
"> @amjltc295 Did you find any solution to above issue?\r\n\r\nIt seems that any python process takes up more and more RAM over time. A co-worker of mine had issues as well but with some other python project. We have our applications in docker containers that are limited in RAM, so they all run at 100% after some time.\r\nAnyways, the applications still works as it is supposed to be, so we did not put further research into that.\r\n",
"return outputs.cpu()",
"Reporting that this issue still exists with the forward pass of BertModel, specifically the call to BertModel.forward(), I notice that system RAM usage increases on this line each iteration.\r\nTransformers v3.1.0 \r\nPytorch v1.7.1\r\nCuda 11.0.221\r\nCudnn 8.0.5_0\r\nRTX 3090\r\nI am unable to run MNLI because of this, the RAM maxes out, and then system crashes towards the end of the 3rd training epoch. I will do some more digging and report back if I find a solution.",
"Mark. Still suffering this problem in Aug. 2022. \r\nSomeone can offer a solution for this could be highly appreciated ",
"I'm having the same issue running in Databricks with the following versions:\r\ntransformers 4.25.1\r\npytorch 1.13.1+cu117 \r\nNvidia Tesla T4"
] | 1,547 | 1,680 | 1,548 | NONE | null | I wrap ``BertModel'' as a persistent object and initialize it once, then use it iteratively as a feature extractor to generate features for each data batch, but I seem to have hit a GPU memory leak. After the program starts, GPU memory usage keeps increasing until it runs out of memory. The key code is below. Every time 'self.bert_model.get_bert_feature()' executes, GPU memory grows. From some simple debugging, the problem may be caused by 'BertEmbeddings.forward()'. My PyTorch version is 0.4.0 on Python 3. Waiting for your reply, thanks very much!
```python
class BertModel(PreTrainedBertModel):
def __init__(self, config):
super(BertModel, self).__init__(config)
self.embeddings = BertEmbeddings(config)
self.encoder = BertEncoder(config)
self.pooler = BertPooler(config)
self.apply(self.init_bert_weights)
def forward(self, input_ids, token_type_ids=None, attention_mask=None, output_all_encoded_layers=False):
#logger.info('bert forward')
if attention_mask is None:
attention_mask = torch.ones_like(input_ids)
if token_type_ids is None:
token_type_ids = torch.zeros_like(input_ids)
# We create a 3D attention mask from a 2D tensor mask.
# Sizes are [batch_size, 1, 1, to_seq_length]
# So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length]
# this attention mask is more simple than the triangular masking of causal attention
# used in OpenAI GPT, we just need to prepare the broadcast dimension here.
extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
# Since attention_mask is 1.0 for positions we want to attend and 0.0 for
# masked positions, this operation will create a tensor which is 0.0 for
# positions we want to attend and -10000.0 for masked positions.
# Since we are adding it to the raw scores before the softmax, this is
# effectively the same as removing these entirely.
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
embedding_output = self.embeddings(input_ids, token_type_ids)
encoded_layers = self.encoder(embedding_output,
extended_attention_mask,
output_all_encoded_layers=output_all_encoded_layers)
return encoded_layers
class Bert_Instance(object):
def __init__(self, vocab_file, bert_model_path, device):
#tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case)
self.tokenizer = BertTokenizer(vocab_file)
self.model = BertModel.from_pretrained(bert_model_path)
self.device = device
print ('bert_device=', self.device)
self.model.to(self.device)
self.model.eval()
for para in self.model.parameters():
para.requires_grad = False
def get_feature(self, text_list, max_seq_length=50, layer=-1):
'''
Args:
text_list is a list to store the sentences, length is the sentence_number
Return:
(batch_size, seq_len+2, hidden_size)
'''
# a list, each dict element key is (ex_index, tokens, input_ids, input_mask, input_type_ids)
all_features = convert_examples_to_features(examples=text_list,
max_seq_length=max_seq_length,
tokenizer=self.tokenizer)
all_input_ids = torch.tensor([f['input_ids'] for f in all_features]).type(torch.cuda.LongTensor).to(self.device)
all_input_mask = torch.tensor([f['input_mask'] for f in all_features]).type(torch.cuda.LongTensor).to(self.device)
all_encoder_layers = self.model(all_input_ids,
token_type_ids=None,
attention_mask=all_input_mask)
return all_encoder_layers, all_input_mask
class Bert_Model(object):
def __init__(self, device):
self.bert_model = Bert_Instance(BERT_VOCAB, BERT_MODEL, device)
self.device = device
self.zp_pre_cache = {}
self.zp_post_cache = {}
self.candi_np = {}
self.cache = {'zp_pre': self.zp_pre_cache,
'zp_post': self.zp_post_cache,
'candi_np': self.candi_np}
def get_bert_feature(self, text_list, cache_name, batch_id, max_seq_length=30, layer=-1):
if batch_id in self.cache[cache_name].keys():
#res = torch.tensor(self.cache[cache_name][batch_id]).type(torch.cuda.FloatTensor).to(self.device)
res = self.cache[cache_name][batch_id]
return res
else:
res = self.bert_model.get_feature(text_list, max_seq_length, layer)
self.cache[cache_name][batch_id] = res
return res
class Experiment(object):
def __init__(self):
# load training data
with open(DIR+"data/train_data", "rb") as fin1, \
open(DIR+"data/emb","rb") as fin2:
self.train_generator = cPickle.load(fin1)
self.embedding_matrix, _ , _ = cPickle.load(fin2, encoding='iso-8859-1')
# load test data
self.test_generator = DataGenerator("test", 256)
self.dev_data = self.train_generator.generate_dev_data()
self.test_data = self.test_generator.generate_data()
# declare model architecture
self.model = Network(nnargs["embedding_size"], nnargs["embedding_dimension"], self.embedding_matrix, nnargs["hidden_dimension"], 2).to(NET_DEVICE)
self.bert_model = Bert_Model(BERT_DEVICE)
this_lr = 0.003
self.optimizer = optim.Adagrad(self.model.parameters(), lr = this_lr)
self.best = {"sum":0.0, "test_f":0.0, "best_test_f":0.0}
self.dropout = nnargs["dropout"]
def forward_step(self, data, mode, dropout=0.0):
zp_relative_index, zp_pre, zp_pre_mask, zp_post, zp_post_mask, candi_np, candi_np_mask, feature, zp_pre_words, zp_post_words, candi_np_words, batch_id = data2tensor(data)
batch_id = mode + '_' + str(batch_id)
zp_pre_bert, _ = self.bert_model.get_bert_feature(zp_pre_words, 'zp_pre', batch_id)
zp_post_bert, _ = self.bert_model.get_bert_feature(zp_post_words, 'zp_post', batch_id)
candi_np_bert, _ = self.bert_model.get_bert_feature(candi_np_words, 'candi_np', batch_id)
.....
```
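A possible direction, following the `torch.no_grad()` suggestion in the comments above (this is only a sketch using the stock library `BertModel`, not a drop-in fix for the code above): run the forward pass under `no_grad` and move results off the GPU before caching them, so no activations are kept alive across batches.
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

def extract_features(text):
    tokens = ['[CLS]'] + tokenizer.tokenize(text) + ['[SEP]']
    input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
    with torch.no_grad():                 # no autograd graph is retained across calls
        encoded_layers, _ = model(input_ids)
    # detach and move to CPU before caching, so a feature cache does not pin GPU memory
    return encoded_layers[-1].detach().cpu()
```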
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/197/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/196 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/196/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/196/comments | https://api.github.com/repos/huggingface/transformers/issues/196/events | https://github.com/huggingface/transformers/issues/196 | 399,155,566 | MDU6SXNzdWUzOTkxNTU1NjY= | 196 | TODO statement on Question/Answering Model | {
"login": "phatlast96",
"id": 10504024,
"node_id": "MDQ6VXNlcjEwNTA0MDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/10504024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phatlast96",
"html_url": "https://github.com/phatlast96",
"followers_url": "https://api.github.com/users/phatlast96/followers",
"following_url": "https://api.github.com/users/phatlast96/following{/other_user}",
"gists_url": "https://api.github.com/users/phatlast96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phatlast96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phatlast96/subscriptions",
"organizations_url": "https://api.github.com/users/phatlast96/orgs",
"repos_url": "https://api.github.com/users/phatlast96/repos",
"events_url": "https://api.github.com/users/phatlast96/events{/privacy}",
"received_events_url": "https://api.github.com/users/phatlast96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not really, I've moved to something else since I don't expect this to change significantly the results.\r\nI will remove the TODO."
] | 1,547 | 1,547 | 1,547 | NONE | null | Has this been confirmed?
https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/pytorch_pretrained_bert/modeling.py#L1084 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/196/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/195 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/195/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/195/comments | https://api.github.com/repos/huggingface/transformers/issues/195/events | https://github.com/huggingface/transformers/issues/195 | 398,799,873 | MDU6SXNzdWUzOTg3OTk4NzM= | 195 | Potentially redundant learning rate scheduling | {
"login": "nikitakit",
"id": 252225,
"node_id": "MDQ6VXNlcjI1MjIyNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/252225?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikitakit",
"html_url": "https://github.com/nikitakit",
"followers_url": "https://api.github.com/users/nikitakit/followers",
"following_url": "https://api.github.com/users/nikitakit/following{/other_user}",
"gists_url": "https://api.github.com/users/nikitakit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikitakit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikitakit/subscriptions",
"organizations_url": "https://api.github.com/users/nikitakit/orgs",
"repos_url": "https://api.github.com/users/nikitakit/repos",
"events_url": "https://api.github.com/users/nikitakit/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikitakit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Humm could be the case indeed. What do think about this @tholor?",
"As far as I can tell this was introduced in c8ea286048517d9072397d77f4de21b8483a4531 as a byproduct of adding float16 support, and was then copied to other example files as well.",
"I agree, there seems to be double LR scheduling. The applied LR is therefore lower than intended. Quick plot of the LR being set in the outer scope (i.e. in run_squad or run_lm_finetuning) vs. the inner one (in BERTAdam) shows this: \r\n\r\n![lr_schedule_debug](https://user-images.githubusercontent.com/1563902/51321145-49df5100-1a62-11e9-8908-516aaf9362d4.png)\r\n\r\nIn addition, I have noticed two further parts for potential clean up: \r\n1. I don't see a reason why the function `warmup_linear()` is implemented in two places: In `optimization.py` and in each example script. \r\n2. Is the method `optimizer.get_lr()` ever being called? There's actually another LR scheduling.\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/f040a43cb3954e14dc47a815de012ac3f87a85d0/pytorch_pretrained_bert/optimization.py#L79-L92",
"There is als an additional problem that causes the learning rate to not be set correctly in run_classifier.py. I created a pull request for that (and the double warmup problem): #218 ",
"Is there are something done for this double warmup bug?",
"Yes, @matej-svejda worked on this in https://github.com/huggingface/pytorch-pretrained-BERT/pull/218",
"I see that, but it isn't merge now?",
"No, not yet. As you can see in the PR it's still WIP and he committed only 4 hours ago. If you need the fix urgently, you can apply the changes easily locally. It's quite a small fix.",
"Sorry,I forget to see the time :)",
"By the way, how can I draw a picture about the LR schedule about BERT like yours. I see if use `print(optimizer.param_groups['lr'] `, the learning rate is always like I init it.",
"I have plotted `optimizer.param_groups[0][\"lr\"]` from here: \r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/f040a43cb3954e14dc47a815de012ac3f87a85d0/examples/run_lm_finetuning.py#L610-L616\r\n\r\nand `lr_scheduled` from here: \r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/f040a43cb3954e14dc47a815de012ac3f87a85d0/pytorch_pretrained_bert/optimization.py#L145-L152\r\n\r\nYour above could should actually throw an exception because `optimizer.param_groups` is a list. Try `optimizer.param_groups[0][\"lr\"]` or `lr_this_step`.",
"Ok this should be fixed in master now!"
] | 1,547 | 1,549 | 1,549 | NONE | null | In the two code snippets below:
https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/examples/run_lm_finetuning.py#L570-L573
https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/examples/run_lm_finetuning.py#L611-L613
it appears that learning rate warmup is being done *twice*: once in the example file, and once inside the BertAdam class. Am I reading this wrong? Because I'm pretty sure the BertAdam class performs its own warm-up when initialized with those arguments.
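A rough numerical illustration (not code from the repository) of what double scheduling would mean — the script scales the learning rate once with `warmup_linear`, and `BertAdam` then applies the same factor again inside `step()`:
```python
# Toy numbers to illustrate the suspected double scaling; warmup_linear mirrors
# the helper defined in optimization.py.
def warmup_linear(x, warmup=0.002):
    if x < warmup:
        return x / warmup
    return 1.0 - x

lr, warmup, global_step, t_total = 3e-5, 0.1, 100, 10000
progress = global_step / t_total

lr_outer = lr * warmup_linear(progress, warmup)             # scaling done in the example script
lr_effective = lr_outer * warmup_linear(progress, warmup)   # BertAdam would scale again in step()

print(lr_outer, lr_effective)  # during warmup, lr_effective is much smaller than intended
```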
Here is an excerpt from the BertAdam class, where warm-up is also applied:
https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/pytorch_pretrained_bert/optimization.py#L146-L150
This also applies to other examples, e.g.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/examples/run_squad.py#L848-L851
https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/examples/run_squad.py#L909-L911 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/195/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/194 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/194/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/194/comments | https://api.github.com/repos/huggingface/transformers/issues/194/events | https://github.com/huggingface/transformers/issues/194 | 398,771,339 | MDU6SXNzdWUzOTg3NzEzMzk= | 194 | run_classifier.py doesn't save any configurations and I can't load the trained model. | {
"login": "anz2",
"id": 24385276,
"node_id": "MDQ6VXNlcjI0Mzg1Mjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/24385276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anz2",
"html_url": "https://github.com/anz2",
"followers_url": "https://api.github.com/users/anz2/followers",
"following_url": "https://api.github.com/users/anz2/following{/other_user}",
"gists_url": "https://api.github.com/users/anz2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anz2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anz2/subscriptions",
"organizations_url": "https://api.github.com/users/anz2/orgs",
"repos_url": "https://api.github.com/users/anz2/repos",
"events_url": "https://api.github.com/users/anz2/events{/privacy}",
"received_events_url": "https://api.github.com/users/anz2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I trained BertForSequenceClassification model with cola dataset mode for binary classification. It saved only eval_results.txt and pytorch_model.bin files. When I am loading model again like:\r\nmodel = BertForSequenceClassification.from_pretrained('models/') \r\nit produces such error:\r\nwith open(json_file, \"r\", encoding='utf-8') as reader:\r\nFileNotFoundError: [Errno 2] No such file or directory: 'models/bert_config.json'\r\n\r\nI trained models using the command:\r\nexport GLUE_DIR=data_dir_path; python run_classifier.py --task_name cola --do_train --do_eval --data_dir $GLUE_DIR/ --bert_model bert-base-multilingual-cased --max_seq_length 128 --train_batch_size 16 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir models/\r\n\r\nDo I have any error with training script? \r\nHow can I produce such config.json file to load model successfully?",
"You can fetch the configuration from S3 like it's [done in the example](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L560).\r\n\r\nAlternatively, you can save the configuration with:\r\n```python\r\nwith open('config.json', 'w') as f:\r\n f.write(model.config.to_json_string())\r\n```"
] | 1,547 | 1,547 | 1,547 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/194/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/193 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/193/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/193/comments | https://api.github.com/repos/huggingface/transformers/issues/193/events | https://github.com/huggingface/transformers/pull/193 | 398,747,903 | MDExOlB1bGxSZXF1ZXN0MjQ0MzM3MDg0 | 193 | Fix importing unofficial TF models | {
"login": "kkadowa",
"id": 46347328,
"node_id": "MDQ6VXNlcjQ2MzQ3MzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/46347328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kkadowa",
"html_url": "https://github.com/kkadowa",
"followers_url": "https://api.github.com/users/kkadowa/followers",
"following_url": "https://api.github.com/users/kkadowa/following{/other_user}",
"gists_url": "https://api.github.com/users/kkadowa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kkadowa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkadowa/subscriptions",
"organizations_url": "https://api.github.com/users/kkadowa/orgs",
"repos_url": "https://api.github.com/users/kkadowa/repos",
"events_url": "https://api.github.com/users/kkadowa/events{/privacy}",
"received_events_url": "https://api.github.com/users/kkadowa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks !"
] | 1,547 | 1,596 | 1,547 | CONTRIBUTOR | null | Importing unofficial TF models seems to be working well, at least for me.
This PR resolves #50. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/193/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/193",
"html_url": "https://github.com/huggingface/transformers/pull/193",
"diff_url": "https://github.com/huggingface/transformers/pull/193.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/193.patch",
"merged_at": 1547455442000
} |
https://api.github.com/repos/huggingface/transformers/issues/192 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/192/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/192/comments | https://api.github.com/repos/huggingface/transformers/issues/192/events | https://github.com/huggingface/transformers/pull/192 | 398,671,006 | MDExOlB1bGxSZXF1ZXN0MjQ0Mjg3NDk3 | 192 | Documentation Fixes | {
"login": "bradleymackey",
"id": 11067205,
"node_id": "MDQ6VXNlcjExMDY3MjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/11067205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bradleymackey",
"html_url": "https://github.com/bradleymackey",
"followers_url": "https://api.github.com/users/bradleymackey/followers",
"following_url": "https://api.github.com/users/bradleymackey/following{/other_user}",
"gists_url": "https://api.github.com/users/bradleymackey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bradleymackey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bradleymackey/subscriptions",
"organizations_url": "https://api.github.com/users/bradleymackey/orgs",
"repos_url": "https://api.github.com/users/bradleymackey/repos",
"events_url": "https://api.github.com/users/bradleymackey/events{/privacy}",
"received_events_url": "https://api.github.com/users/bradleymackey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing this as I'm not sure it's right."
] | 1,547 | 1,547 | 1,547 | NONE | null | Fixes misnamed documentation comments in `run_squad.py` and `run_squad2.py`, and updates `README.md` for the updated file-conversion syntax. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/192/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/192",
"html_url": "https://github.com/huggingface/transformers/pull/192",
"diff_url": "https://github.com/huggingface/transformers/pull/192.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/192.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/191 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/191/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/191/comments | https://api.github.com/repos/huggingface/transformers/issues/191/events | https://github.com/huggingface/transformers/pull/191 | 398,655,399 | MDExOlB1bGxSZXF1ZXN0MjQ0Mjc3Njk3 | 191 | lm_finetuning compatibility with Python 3.5 | {
"login": "kkadowa",
"id": 46347328,
"node_id": "MDQ6VXNlcjQ2MzQ3MzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/46347328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kkadowa",
"html_url": "https://github.com/kkadowa",
"followers_url": "https://api.github.com/users/kkadowa/followers",
"following_url": "https://api.github.com/users/kkadowa/following{/other_user}",
"gists_url": "https://api.github.com/users/kkadowa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kkadowa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkadowa/subscriptions",
"organizations_url": "https://api.github.com/users/kkadowa/orgs",
"repos_url": "https://api.github.com/users/kkadowa/repos",
"events_url": "https://api.github.com/users/kkadowa/events{/privacy}",
"received_events_url": "https://api.github.com/users/kkadowa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thank! Nice to have Python 3.5 compatibility again here!"
] | 1,547 | 1,596 | 1,547 | CONTRIBUTOR | null | dicts are not ordered in Python 3.5 or prior, which is a cause of #175.
This PR replaces one with a list, to keep its order. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/191/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/191/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/191",
"html_url": "https://github.com/huggingface/transformers/pull/191",
"diff_url": "https://github.com/huggingface/transformers/pull/191.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/191.patch",
"merged_at": 1547455208000
} |
https://api.github.com/repos/huggingface/transformers/issues/190 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/190/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/190/comments | https://api.github.com/repos/huggingface/transformers/issues/190/events | https://github.com/huggingface/transformers/pull/190 | 398,655,261 | MDExOlB1bGxSZXF1ZXN0MjQ0Mjc3NjA5 | 190 | Fix documentation (missing backslashes) | {
"login": "kkadowa",
"id": 46347328,
"node_id": "MDQ6VXNlcjQ2MzQ3MzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/46347328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kkadowa",
"html_url": "https://github.com/kkadowa",
"followers_url": "https://api.github.com/users/kkadowa/followers",
"following_url": "https://api.github.com/users/kkadowa/following{/other_user}",
"gists_url": "https://api.github.com/users/kkadowa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kkadowa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkadowa/subscriptions",
"organizations_url": "https://api.github.com/users/kkadowa/orgs",
"repos_url": "https://api.github.com/users/kkadowa/repos",
"events_url": "https://api.github.com/users/kkadowa/events{/privacy}",
"received_events_url": "https://api.github.com/users/kkadowa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thanks!"
] | 1,547 | 1,596 | 1,547 | CONTRIBUTOR | null | This PR adds missing backslashes in LM Fine-tuning subsection in README.md. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/190/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/190",
"html_url": "https://github.com/huggingface/transformers/pull/190",
"diff_url": "https://github.com/huggingface/transformers/pull/190.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/190.patch",
"merged_at": 1547455144000
} |
https://api.github.com/repos/huggingface/transformers/issues/189 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/189/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/189/comments | https://api.github.com/repos/huggingface/transformers/issues/189/events | https://github.com/huggingface/transformers/pull/189 | 398,649,768 | MDExOlB1bGxSZXF1ZXN0MjQ0Mjc0MTcz | 189 | [bug fix] args.do_lower_case is always True | {
"login": "donglixp",
"id": 1070872,
"node_id": "MDQ6VXNlcjEwNzA4NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1070872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donglixp",
"html_url": "https://github.com/donglixp",
"followers_url": "https://api.github.com/users/donglixp/followers",
"following_url": "https://api.github.com/users/donglixp/following{/other_user}",
"gists_url": "https://api.github.com/users/donglixp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donglixp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donglixp/subscriptions",
"organizations_url": "https://api.github.com/users/donglixp/orgs",
"repos_url": "https://api.github.com/users/donglixp/repos",
"events_url": "https://api.github.com/users/donglixp/events{/privacy}",
"received_events_url": "https://api.github.com/users/donglixp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @donglixp!"
] | 1,547 | 1,547 | 1,547 | CONTRIBUTOR | null | The "default=True" makes args.do_lower_case always True.
```python
parser.add_argument("--do_lower_case",
default=True,
action='store_true')
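# Hedged note (an illustration, not necessarily the exact merged patch): with
# argparse, combining default=True with action='store_true' means the flag can
# never be False, since store_true only ever sets it to True. Dropping the
# default restores the intended behaviour, e.g.:
#   parser.add_argument("--do_lower_case", action='store_true')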
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/189/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/189",
"html_url": "https://github.com/huggingface/transformers/pull/189",
"diff_url": "https://github.com/huggingface/transformers/pull/189.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/189.patch",
"merged_at": 1547455118000
} |
https://api.github.com/repos/huggingface/transformers/issues/188 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/188/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/188/comments | https://api.github.com/repos/huggingface/transformers/issues/188/events | https://github.com/huggingface/transformers/issues/188 | 398,588,638 | MDU6SXNzdWUzOTg1ODg2Mzg= | 188 | Weight Decay Fix Original Paper | {
"login": "PetrochukM",
"id": 7424737,
"node_id": "MDQ6VXNlcjc0MjQ3Mzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7424737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PetrochukM",
"html_url": "https://github.com/PetrochukM",
"followers_url": "https://api.github.com/users/PetrochukM/followers",
"following_url": "https://api.github.com/users/PetrochukM/following{/other_user}",
"gists_url": "https://api.github.com/users/PetrochukM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PetrochukM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PetrochukM/subscriptions",
"organizations_url": "https://api.github.com/users/PetrochukM/orgs",
"repos_url": "https://api.github.com/users/PetrochukM/repos",
"events_url": "https://api.github.com/users/PetrochukM/events{/privacy}",
"received_events_url": "https://api.github.com/users/PetrochukM/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes"
] | 1,547 | 1,547 | 1,547 | NONE | null | Hi There!
Is the weight decay fix from this paper?
https://arxiv.org/abs/1711.05101
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/188/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/187 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/187/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/187/comments | https://api.github.com/repos/huggingface/transformers/issues/187/events | https://github.com/huggingface/transformers/issues/187 | 398,252,066 | MDU6SXNzdWUzOTgyNTIwNjY= | 187 | issue is, that ##string will repeats at intermediate, it collapses all index for mask words | {
"login": "MuruganR96",
"id": 35978784,
"node_id": "MDQ6VXNlcjM1OTc4Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/35978784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MuruganR96",
"html_url": "https://github.com/MuruganR96",
"followers_url": "https://api.github.com/users/MuruganR96/followers",
"following_url": "https://api.github.com/users/MuruganR96/following{/other_user}",
"gists_url": "https://api.github.com/users/MuruganR96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MuruganR96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MuruganR96/subscriptions",
"organizations_url": "https://api.github.com/users/MuruganR96/orgs",
"repos_url": "https://api.github.com/users/MuruganR96/repos",
"events_url": "https://api.github.com/users/MuruganR96/events{/privacy}",
"received_events_url": "https://api.github.com/users/MuruganR96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I don't think you can do that in a clean way, sorry. That how BERT is trained.",
"> That how BERT is trained\r\n\r\ni was pretrained our **bert-base-uncased** model with own dataset. \r\nBatch_size=32\r\nmax_seq_length=128\r\n\r\n> I don't think you can do that in a clean way\r\n\r\nyou asked me \"what was the problem now?\" i think so.\r\nnormal word (ex: 'cadd') splits into multiple ##string (ex: 'cad', '##d').\r\nit was affected my mask word output.(like index dynamically changed)\r\n\r\nstep 1: tokenize \r\ntokenized_text = tokenizer.tokenize(source_transcript)\r\n\r\nstep 2:replace masked word as \"[MASK]\" in index position\r\ntokenized_text[index] ='[MASK]'\r\n\r\nstep 3:predicting mask word\r\npredicted_index = torch.argmax(predictions[0, masked_index]).item()\r\npredicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]\r\n\r\nex:\r\noriginal text:how to apply for **cadd** (['how', 'to', 'apply', 'for', 'cadd'])\r\nmasked word: cadd\r\nindex:4\r\n\r\nstep 1:\r\n['how', 'le', 'comply', 'for', 'cad', '##d']\r\nnote: index length increased\r\n\r\nstep 2:\r\n['how', 'to', 'apply', 'for', '[MASK]', '##d']\r\nnote: '[MASK]' index increased\r\n\r\nstep 3:\r\nresult: ['sb']\r\nhow to apply for **sb d**\r\n\r\nactually i was pretrained model for this as \"how to apply for card\"\r\nbut it was not predicting well.\r\n\r\nMain issue is, that ##string will repeats at intermediate, it collapses all index for mask words.\r\n\r\nAnd this **Two to Three mask word prediction at the same sentence also very complex**.\r\n \r\n`Two to Three mask word prediction at the same sentence also very complex\r\n`\r\n\r\nhow to solve this problem @thomwolf sir",
"> > That how BERT is trained\r\n> \r\n> i was pretrained our **bert-base-uncased** model with own dataset.\r\n> Batch_size=32\r\n> max_seq_length=128\r\n> \r\n> > I don't think you can do that in a clean way\r\n> \r\n> you asked me \"what was the problem now?\" i think so.\r\n> normal word (ex: 'cadd') splits into multiple ##string (ex: 'cad', '##d').\r\n> it was affected my mask word output.(like index dynamically changed)\r\n> \r\n> step 1: tokenize\r\n> tokenized_text = tokenizer.tokenize(source_transcript)\r\n> \r\n> step 2:replace masked word as \"[MASK]\" in index position\r\n> tokenized_text[index] ='[MASK]'\r\n> \r\n> step 3:predicting mask word\r\n> predicted_index = torch.argmax(predictions[0, masked_index]).item()\r\n> predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]\r\n> \r\n> ex:\r\n> original text:how to apply for **cadd** (['how', 'to', 'apply', 'for', 'cadd'])\r\n> masked word: cadd\r\n> index:4\r\n> \r\n> step 1:\r\n> ['how', 'le', 'comply', 'for', 'cad', '##d']\r\n> note: index length increased\r\n> \r\n> step 2:\r\n> ['how', 'to', 'apply', 'for', '[MASK]', '##d']\r\n> note: '[MASK]' index increased\r\n> \r\n> step 3:\r\n> result: ['sb']\r\n> how to apply for **sb d**\r\n> \r\n> actually i was pretrained model for this as \"how to apply for card\"\r\n> but it was not predicting well.\r\n> \r\n> Main issue is, that ##string will repeats at intermediate, it collapses all index for mask words.\r\n> \r\n> And this **Two to Three mask word prediction at the same sentence also very complex**.\r\n> \r\n> `Two to Three mask word prediction at the same sentence also very complex `\r\n> \r\n> how to solve this problem @thomwolf sir\r\n\r\nsir @thomwolf any suggestions.\r\n\r\nThanks."
] | 1,547 | 1,547 | 1,547 | NONE | null | ```
----------------------------------> how much belan i havin my credit card and also debitcard
----------------------------------> ['how', 'much', 'belan', 'i', 'havin', 'my', 'credit', 'card', 'and', 'also', 'debitcard']
----------------------------------> ['**belan**', '**havin**']
----------------------------------> [2, 4]
----------------------------------> ['how', 'much', '**belan**', 'i', '**havin**', 'my', 'credit', 'card', 'and', 'also', 'debitcard']
----------------------------------> how much belan i havin my credit card and also debitcard
before_tokenized_text-------------> ['how', 'much', **'bela'**, **'##n'**, 'i', **'ha'**, **'##vin'**, 'my', 'credit', 'card', 'and', 'also', '**de'**, **'##bit',** '**##card']**
index_useless---------------------> [2, 4]
after_tokenized_text--------------> ['how', 'much', '[MASK]', '##n', '[MASK]', 'ha', '##vin', 'my', 'credit', 'card', 'and', 'also', 'de', '##bit', '##card']
########## ['more', 'most']
########## 2 <---------index_useless_length
########## 2 <---------predicted_words_len
########## how much [MASK] n [MASK] ha vin my credit card and also de bit card <---------tokenized_text
########## index_tk_aft [2, 4]
########## how much more n most ha vin my credit card and also de bit card
########## how much more n most ha vin my credit card and also de bit card <---------Result
```
I think, as you can see above, the misspelled words at indices [2, 4] are the ones I mask and try to predict.
But here is what happens:
##string pieces such as '##n' and '##vin' end up spoiling the final predicted output.
I have found and tried many approaches, but none of them have worked so far.
**How can I predict and fetch two or more masked words?**
Thanks.
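One direction that might help (a sketch only, under the assumption that masking is done word by word after looking at each word's WordPiece split, so sub-word pieces can never shift the `[MASK]` indices, and that several positions are read out in one pass):
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

words = ['how', 'much', 'belan', 'i', 'havin', 'my', 'credit', 'card']
bad_word_indices = [2, 4]                          # word-level indices to mask

tokens, masked_positions = [], []
for i, word in enumerate(words):
    if i in bad_word_indices:
        masked_positions.append(len(tokens))       # position in the WordPiece sequence
        tokens.append('[MASK]')                    # one [MASK] stands in for the whole word
    else:
        tokens.extend(tokenizer.tokenize(word))    # a word may split into several ## pieces

input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    predictions = model(input_ids)

for pos in masked_positions:
    predicted_id = torch.argmax(predictions[0, pos]).item()
    print(pos, tokenizer.convert_ids_to_tokens([predicted_id])[0])
```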
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/187/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/186 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/186/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/186/comments | https://api.github.com/repos/huggingface/transformers/issues/186/events | https://github.com/huggingface/transformers/issues/186 | 398,229,727 | MDU6SXNzdWUzOTgyMjk3Mjc= | 186 | BertOnlyMLMHead is a duplicate of BertLMPredictionHead | {
"login": "artemisart",
"id": 9201969,
"node_id": "MDQ6VXNlcjkyMDE5Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9201969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/artemisart",
"html_url": "https://github.com/artemisart",
"followers_url": "https://api.github.com/users/artemisart/followers",
"following_url": "https://api.github.com/users/artemisart/following{/other_user}",
"gists_url": "https://api.github.com/users/artemisart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/artemisart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/artemisart/subscriptions",
"organizations_url": "https://api.github.com/users/artemisart/orgs",
"repos_url": "https://api.github.com/users/artemisart/repos",
"events_url": "https://api.github.com/users/artemisart/events{/privacy}",
"received_events_url": "https://api.github.com/users/artemisart/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That's an heritage of how I converted the TF code (by reproducing the scope architecture in TF with PyTorch classes). We can't really change that now without re-converting all the TF code.\r\nIf you want a more concise version of PyTorch BERT, you can check [pytorchic-bert](https://github.com/dhlee347/pytorchic-bert)."
] | 1,547 | 1,547 | 1,547 | NONE | null | https://github.com/huggingface/pytorch-pretrained-BERT/blob/35becc6d84f620c3da48db460d6fb900f2451782/pytorch_pretrained_bert/modeling.py#L387-L394
I don't understand how it is useful to wrap the BertLMPredictionHead class like that; perhaps it was left over from some refactoring? I can do a PR if you confirm it can be replaced.
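For reference, the wrapper in question looks roughly like this (an approximation reconstructed from the linked lines, not a verbatim copy):
```python
import torch.nn as nn
from pytorch_pretrained_bert.modeling import BertLMPredictionHead

class BertOnlyMLMHead(nn.Module):
    # A thin wrapper: it only instantiates and forwards to BertLMPredictionHead.
    def __init__(self, config, bert_model_embedding_weights):
        super(BertOnlyMLMHead, self).__init__()
        self.predictions = BertLMPredictionHead(config, bert_model_embedding_weights)

    def forward(self, sequence_output):
        return self.predictions(sequence_output)
```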
BertOnlyMLMHead is only used in BertForMaskedLM. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/186/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/185 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/185/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/185/comments | https://api.github.com/repos/huggingface/transformers/issues/185/events | https://github.com/huggingface/transformers/issues/185 | 398,218,741 | MDU6SXNzdWUzOTgyMTg3NDE= | 185 | got an unexpected keyword argument 'cache_dir' | {
"login": "chiyuzhang94",
"id": 33407613,
"node_id": "MDQ6VXNlcjMzNDA3NjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/33407613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiyuzhang94",
"html_url": "https://github.com/chiyuzhang94",
"followers_url": "https://api.github.com/users/chiyuzhang94/followers",
"following_url": "https://api.github.com/users/chiyuzhang94/following{/other_user}",
"gists_url": "https://api.github.com/users/chiyuzhang94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chiyuzhang94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiyuzhang94/subscriptions",
"organizations_url": "https://api.github.com/users/chiyuzhang94/orgs",
"repos_url": "https://api.github.com/users/chiyuzhang94/repos",
"events_url": "https://api.github.com/users/chiyuzhang94/events{/privacy}",
"received_events_url": "https://api.github.com/users/chiyuzhang94/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should update to the latest version of `pytorch_pretrained_bert`(`pip install pytorch_pretrained_bert --upgrade`)"
] | 1,547 | 1,592 | 1,547 | NONE | null | I used the following code to run the job: `
export GLUE_DIR=./data
python3 run_classifier.py \
--task_name COLA \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $GLUE_DIR/ \
--bert_model bert-large-uncased \
--max_seq_length 20 \
--train_batch_size 10 \
--learning_rate 2e-5 \
--num_train_epochs 2.0 \
--output_dir ./output`
Then, I got the output:
`01/11/2019 02:02:55 - INFO - __main__ - device: cpu n_gpu: 0, distributed training: False, 16-bits training: False
01/11/2019 02:02:56 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt from cache at /Users/chiyuzhang/.pytorch_pretrained_bert/9b3c03a36e83b13d5ba95ac965c9f9074a99e14340c523ab405703179e79fc46.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
01/11/2019 02:02:56 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased.tar.gz from cache at /Users/chiyuzhang/.pytorch_pretrained_bert/214d4777e8e3eb234563136cd3a49f6bc34131de836848454373fa43f10adc5e.abfbb80ee795a608acbf35c7bf2d2d58574df3887cdd94b355fc67e03fddba05
01/11/2019 02:02:56 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /Users/chiyuzhang/.pytorch_pretrained_bert/214d4777e8e3eb234563136cd3a49f6bc34131de836848454373fa43f10adc5e.abfbb80ee795a608acbf35c7bf2d2d58574df3887cdd94b355fc67e03fddba05 to temp dir /var/folders/j0/_kd2ppm53wnb6pjypwy3gc_00000gn/T/tmpynwe_15z
01/11/2019 02:03:06 - INFO - pytorch_pretrained_bert.modeling - Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 4096,
"max_position_embeddings": 512,
"num_attention_heads": 16,
"num_hidden_layers": 24,
"type_vocab_size": 2,
"vocab_size": 30522
}
Traceback (most recent call last):
File "run_classifier.py", line 619, in <module>
main()
File "run_classifier.py", line 455, in main
num_labels = 2)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 502, in from_pretrained
model = cls(config, *inputs, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'cache_dir'`
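As the comment above suggests, this error usually means the installed `pytorch_pretrained_bert` is older than the example script and its `from_pretrained` does not yet understand `cache_dir`, so the keyword falls through to the model constructor. A sketch of the two usual remedies (assuming the package and API of that era):
```python
# Either upgrade the package so from_pretrained accepts cache_dir:
#   pip install --upgrade pytorch-pretrained-bert
# or, on the older version, load without the cache_dir keyword:
from pytorch_pretrained_bert import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('bert-large-uncased', num_labels=2)
```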
Could you please help me to fix this problem? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/185/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/184 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/184/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/184/comments | https://api.github.com/repos/huggingface/transformers/issues/184/events | https://github.com/huggingface/transformers/issues/184 | 398,208,606 | MDU6SXNzdWUzOTgyMDg2MDY= | 184 | Python 3.5 + Torch 1.0 does not work | {
"login": "yuhui-zh15",
"id": 17669473,
"node_id": "MDQ6VXNlcjE3NjY5NDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/17669473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuhui-zh15",
"html_url": "https://github.com/yuhui-zh15",
"followers_url": "https://api.github.com/users/yuhui-zh15/followers",
"following_url": "https://api.github.com/users/yuhui-zh15/following{/other_user}",
"gists_url": "https://api.github.com/users/yuhui-zh15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuhui-zh15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuhui-zh15/subscriptions",
"organizations_url": "https://api.github.com/users/yuhui-zh15/orgs",
"repos_url": "https://api.github.com/users/yuhui-zh15/repos",
"events_url": "https://api.github.com/users/yuhui-zh15/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuhui-zh15/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you @yuhui-zh15 sir. i will check.",
"This should be fixed on master now (thanks to #191 )"
] | 1,547 | 1,547 | 1,547 | NONE | null | When running `run_lm_finetuning.py` to fine-tune the language model with default settings (see command below), the run sometimes succeeds, but sometimes I received different errors such as `RuntimeError: The size of tensor a must match the size of tensor b at non-singleton dimension 1`, `RuntimeError: Creating MTGP constants failed. at /pytorch/aten/src/THC/THCTensorRandom.cu:35` or `RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)`. This problem can be solved by updating from `python3.5` to `python3.6`.
```
python run_lm_finetuning.py \
--bert_model ~/bert/models/bert-base-uncased/ \
--do_train \
--train_file ~/bert/codes/samples/sample_text.txt \
--output_dir ~/bert/exp/lm \
--num_train_epochs 5.0 \
--learning_rate 3e-5 \
--train_batch_size 32 \
--max_seq_length 128 \
--on_memory
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/184/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/184/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/183 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/183/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/183/comments | https://api.github.com/repos/huggingface/transformers/issues/183/events | https://github.com/huggingface/transformers/pull/183 | 398,173,731 | MDExOlB1bGxSZXF1ZXN0MjQzOTM4OTc1 | 183 | Adding OpenAI GPT pre-trained model | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"#254 is now the main PR for the inclusion of OpenAI GPT. Closing this PR."
] | 1,547 | 1,549 | 1,549 | MEMBER | null | Adding OpenAI GPT pretrained model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/183/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/183/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/183",
"html_url": "https://github.com/huggingface/transformers/pull/183",
"diff_url": "https://github.com/huggingface/transformers/pull/183.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/183.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/182 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/182/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/182/comments | https://api.github.com/repos/huggingface/transformers/issues/182/events | https://github.com/huggingface/transformers/pull/182 | 398,166,198 | MDExOlB1bGxSZXF1ZXN0MjQzOTMzOTkz | 182 | add do_lower_case arg and adjust model saving for lm finetuning. | {
"login": "tholor",
"id": 1563902,
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholor",
"html_url": "https://github.com/tholor",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"repos_url": "https://api.github.com/users/tholor/repos",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,547 | 1,547 | 1,547 | CONTRIBUTOR | null | Fixes for #177 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/182/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/182",
"html_url": "https://github.com/huggingface/transformers/pull/182",
"diff_url": "https://github.com/huggingface/transformers/pull/182.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/182.patch",
"merged_at": 1547193013000
} |
https://api.github.com/repos/huggingface/transformers/issues/181 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/181/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/181/comments | https://api.github.com/repos/huggingface/transformers/issues/181/events | https://github.com/huggingface/transformers/issues/181 | 398,148,589 | MDU6SXNzdWUzOTgxNDg1ODk= | 181 | All about the training speed in classification job | {
"login": "zhusleep",
"id": 17355556,
"node_id": "MDQ6VXNlcjE3MzU1NTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/17355556?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhusleep",
"html_url": "https://github.com/zhusleep",
"followers_url": "https://api.github.com/users/zhusleep/followers",
"following_url": "https://api.github.com/users/zhusleep/following{/other_user}",
"gists_url": "https://api.github.com/users/zhusleep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhusleep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhusleep/subscriptions",
"organizations_url": "https://api.github.com/users/zhusleep/orgs",
"repos_url": "https://api.github.com/users/zhusleep/repos",
"events_url": "https://api.github.com/users/zhusleep/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhusleep/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe try to use a bigger batch size or try fp16 training?\r\nPlease refer to the [detailed instructions in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#examples)."
] | 1,547 | 1,547 | 1,547 | NONE | null | I run the bert-base-uncased model with task 'mrpc' on Ubuntu, with an NVIDIA P4000 8G.
It's a classification problem, and I use the default demo data.
But the training speed is about 2 batches per second. Is there a problem?
I think it may be too slow, but I cannot find out why. I have another task with 1,300,000 examples that costs 6 hours per epoch. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/181/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/180 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/180/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/180/comments | https://api.github.com/repos/huggingface/transformers/issues/180/events | https://github.com/huggingface/transformers/issues/180 | 398,143,878 | MDU6SXNzdWUzOTgxNDM4Nzg= | 180 | Weights not initialized from pretrained model | {
"login": "lemonhu",
"id": 22219073,
"node_id": "MDQ6VXNlcjIyMjE5MDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/22219073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lemonhu",
"html_url": "https://github.com/lemonhu",
"followers_url": "https://api.github.com/users/lemonhu/followers",
"following_url": "https://api.github.com/users/lemonhu/following{/other_user}",
"gists_url": "https://api.github.com/users/lemonhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lemonhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lemonhu/subscriptions",
"organizations_url": "https://api.github.com/users/lemonhu/orgs",
"repos_url": "https://api.github.com/users/lemonhu/repos",
"events_url": "https://api.github.com/users/lemonhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/lemonhu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi!\r\n\r\nThose messages are correct, the pretrained weights that have been released by Google Brain are just the ones of the core network. They did not release task specific weights. To get a model that solves a specific classification task, you would have to train one yourself or get it from someone else.\r\n\r\n@thomwolf There have been multiple issues about this specific behavior, maybe we should add some kind of text either as a print while loading the model or in the documentation. I would be happy to do it. What would you prefer?",
"Oh, I see, I will train the model with my own dataset, thank you for your answer.",
"Yes you are right @rodgzilla we should detail a bit the messages in [modeling.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L545) to say that `These weights will be trained from scratch`."
] | 1,547 | 1,547 | 1,547 | NONE | null | Thanks for your awesome work!
When I execute the following code for a named entity recognition task:
`model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=num_labels)`
It outputs the following information:
> Weights of BertForTokenClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
Weights from pretrained model not used in BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
What puzzles me is that the parameters of the classifier are not initialized. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/180/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/180/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/179 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/179/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/179/comments | https://api.github.com/repos/huggingface/transformers/issues/179/events | https://github.com/huggingface/transformers/pull/179 | 397,817,028 | MDExOlB1bGxSZXF1ZXN0MjQzNjc1ODgw | 179 | Fix it to run properly even if without `--do_train` param. | {
"login": "likejazz",
"id": 1250095,
"node_id": "MDQ6VXNlcjEyNTAwOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1250095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/likejazz",
"html_url": "https://github.com/likejazz",
"followers_url": "https://api.github.com/users/likejazz/followers",
"following_url": "https://api.github.com/users/likejazz/following{/other_user}",
"gists_url": "https://api.github.com/users/likejazz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/likejazz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/likejazz/subscriptions",
"organizations_url": "https://api.github.com/users/likejazz/orgs",
"repos_url": "https://api.github.com/users/likejazz/repos",
"events_url": "https://api.github.com/users/likejazz/events{/privacy}",
"received_events_url": "https://api.github.com/users/likejazz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,547 | 1,547 | 1,547 | CONTRIBUTOR | null | It was modified similarly to `run_classifier.py`, and fixed to run properly even without the `--do_train` param. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/179/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/179",
"html_url": "https://github.com/huggingface/transformers/pull/179",
"diff_url": "https://github.com/huggingface/transformers/pull/179.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/179.patch",
"merged_at": 1547159951000
} |
https://api.github.com/repos/huggingface/transformers/issues/178 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/178/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/178/comments | https://api.github.com/repos/huggingface/transformers/issues/178/events | https://github.com/huggingface/transformers/issues/178 | 397,703,107 | MDU6SXNzdWUzOTc3MDMxMDc= | 178 | Can we use BERT for Punctuation Prediction? | {
"login": "dalonlobo",
"id": 12654849,
"node_id": "MDQ6VXNlcjEyNjU0ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/12654849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dalonlobo",
"html_url": "https://github.com/dalonlobo",
"followers_url": "https://api.github.com/users/dalonlobo/followers",
"following_url": "https://api.github.com/users/dalonlobo/following{/other_user}",
"gists_url": "https://api.github.com/users/dalonlobo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dalonlobo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dalonlobo/subscriptions",
"organizations_url": "https://api.github.com/users/dalonlobo/orgs",
"repos_url": "https://api.github.com/users/dalonlobo/repos",
"events_url": "https://api.github.com/users/dalonlobo/events{/privacy}",
"received_events_url": "https://api.github.com/users/dalonlobo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I don't really now. I guess you should just give it a try."
] | 1,547 | 1,547 | 1,547 | NONE | null | Can we use the pre-trained BERT model for Punctuation Prediction for Conversational Speech? For example, punctuating an ASR output? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/178/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/177 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/177/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/177/comments | https://api.github.com/repos/huggingface/transformers/issues/177/events | https://github.com/huggingface/transformers/issues/177 | 397,673,308 | MDU6SXNzdWUzOTc2NzMzMDg= | 177 | run_lm_finetuning.py does not define a do_lower_case argument | {
"login": "nikitakit",
"id": 252225,
"node_id": "MDQ6VXNlcjI1MjIyNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/252225?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikitakit",
"html_url": "https://github.com/nikitakit",
"followers_url": "https://api.github.com/users/nikitakit/followers",
"following_url": "https://api.github.com/users/nikitakit/following{/other_user}",
"gists_url": "https://api.github.com/users/nikitakit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikitakit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikitakit/subscriptions",
"organizations_url": "https://api.github.com/users/nikitakit/orgs",
"repos_url": "https://api.github.com/users/nikitakit/repos",
"events_url": "https://api.github.com/users/nikitakit/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikitakit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"On a related note: I see there is learning rate scheduling happening [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L608), but also inside the BertAdam class. Is this not redundant and erroneous? For reference I'm not using FP16 training, which has its own separate optimizer that doesn't appear to perform redundant learning rate scheduling.\r\n\r\nThe same is true for other examples such as SQuAD (maybe it's the cause of #168, where results were reproduced only when using float16 training?)",
"Here also @tholor, maybe you have some feedback from using the fine-tuning script?",
"I figured out why I was seeing such poor results while attempting to fine-tune: the example saves `model.bert` instead of `model` to `pytorch_model.bin`, so the resulting file can't just be zipped up and loaded with `from_pretrained`.",
"I have just fixed the `do_lower_case` bug and adjusted the code for model saving to be in line with the other examples (see #182 ). I hope this solves your issue. Thanks for reporting!\r\n\r\n> As an aside, has anyone successfully applied LM fine-tuning for a downstream task (using this code, or maybe using the original tensorflow implementation)?\r\n\r\nWe are currently using a fine-tuned model for a rather technical corpus and see improvements in terms of the extracted document embeddings in contrast to the original pre-trained BERT. However, we haven't done intense testing of hyperparameters or performance comparisons with the original pre-trained model yet. This is all still work in progress on our side. If you have results that you can share in public, I would be interested to see the difference you achieve. In general, I would only expect improvements for target corpora that have a very different language style than Wiki/OpenBooks.\r\n\r\n> On a related note: I see there is learning rate scheduling happening here, but also inside the BertAdam class.\r\n\r\nWe have only trained with fp16 so far. @thomwolf have you experienced issues with LR scheduling in the other examples? Just copied the code from there.",
"Thanks for fixing these!\r\n\r\nAfter addressing the save/load mismatch I'm seeing downstream performance comparable to using pre-trained BERT. I just got a big scare when the default log level wasn't high enough to notify me that weights were being randomly re-initialized instead of loaded from the file I specified. It's still too early for me to tell if there are actual *benefits* to fine-tuning, though.",
"All this looks fine on master now. Please open a new issue (or re-open this one) if there are other issues.",
"I saw on https://github.com/huggingface/pytorch-pretrained-BERT/issues/126#issuecomment-451910577 that there's potentially some documentation effort underway beyond the README. Thanks a lot for this!\r\n\r\nI wonder if there's the possibility to add more detail about how to properly prepare a custom corpus (e.g. to avoid catastrophical forgetting) finetune the models on. Asking this as my (few, so far) attempts to finetune on other corpora have been destructive for performance on GLUE tasks when compared to the original models (I just discovered this issue, maybe the things you mention above affected me too).\r\n\r\nKudos @thomwolf @tholor for all your work on this!"
] | 1,547 | 1,547 | 1,547 | NONE | null | The file references `args.do_lower_case`, but doesn't have the corresponding `parser.add_argument` call.
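For reference, the missing definition would presumably mirror the one in `run_classifier.py` — a minimal sketch (treat the help text as an assumption):
```python
import argparse

parser = argparse.ArgumentParser()
# The flag that run_lm_finetuning.py reads via args.do_lower_case but never defines:
parser.add_argument("--do_lower_case",
                    action="store_true",
                    help="Set this flag if you are using an uncased model.")
args = parser.parse_args([])       # empty list just to show the default
print(args.do_lower_case)          # False unless --do_lower_case is passed
```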
As an aside, has anyone successfully applied LM fine-tuning for a downstream task (using this code, or maybe using the original tensorflow implementation)? I'm not even sure if the code will run in its current state. And after fixing this issue locally, I've had no luck using the output from fine-tuning: I have a model that gets state-of-the-art results when using pre-trained BERT, but after fine-tuning it performs no better than omitting BERT/pre-training entirely! I don't know whether to suspect that there might be other bugs in the example code, or if the hyperparameters in the README are just a very poor starting point for what I'm doing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/177/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/177/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/176 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/176/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/176/comments | https://api.github.com/repos/huggingface/transformers/issues/176/events | https://github.com/huggingface/transformers/issues/176 | 397,286,604 | MDU6SXNzdWUzOTcyODY2MDQ= | 176 | Add [CLS] and [SEP] tokens in Usage | {
"login": "tomohideshibata",
"id": 16042472,
"node_id": "MDQ6VXNlcjE2MDQyNDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/16042472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomohideshibata",
"html_url": "https://github.com/tomohideshibata",
"followers_url": "https://api.github.com/users/tomohideshibata/followers",
"following_url": "https://api.github.com/users/tomohideshibata/following{/other_user}",
"gists_url": "https://api.github.com/users/tomohideshibata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomohideshibata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomohideshibata/subscriptions",
"organizations_url": "https://api.github.com/users/tomohideshibata/orgs",
"repos_url": "https://api.github.com/users/tomohideshibata/repos",
"events_url": "https://api.github.com/users/tomohideshibata/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomohideshibata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You are right, I'll fix the readme",
"So, just to clarify, I should add '[CLS]' and '[SEP]' to the beginning and end of each utterance respectively, and it's a bug in the examples that they dont do this?",
"@hughperkins did you get any clarification on this?"
] | 1,547 | 1,566 | 1,547 | CONTRIBUTOR | null | Thank you for this great job.
In the Usage section, should the `[CLS]` and `[SEP]` tokens be added at the beginning and end of `tokenized_text`?
```
# Tokenized input
text = "Who was Jim Henson ? Jim Henson was a puppeteer"
tokenized_text = tokenizer.tokenize(text)
```
In the current example, if the first token is masked (this position should be reserved for `[CLS]`), the result will be strange.
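For illustration, a minimal sketch of adding the special tokens by hand (assuming `tokenizer` is the `BertTokenizer` instance from the Usage section):
```python
text = "Who was Jim Henson ? Jim Henson was a puppeteer"
# Wrap the word pieces with the special tokens before converting to ids
tokenized_text = ["[CLS]"] + tokenizer.tokenize(text) + ["[SEP]"]
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
```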
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/176/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/175 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/175/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/175/comments | https://api.github.com/repos/huggingface/transformers/issues/175/events | https://github.com/huggingface/transformers/issues/175 | 397,243,635 | MDU6SXNzdWUzOTcyNDM2MzU= | 175 | RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1) | {
"login": "MuruganR96",
"id": 35978784,
"node_id": "MDQ6VXNlcjM1OTc4Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/35978784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MuruganR96",
"html_url": "https://github.com/MuruganR96",
"followers_url": "https://api.github.com/users/MuruganR96/followers",
"following_url": "https://api.github.com/users/MuruganR96/following{/other_user}",
"gists_url": "https://api.github.com/users/MuruganR96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MuruganR96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MuruganR96/subscriptions",
"organizations_url": "https://api.github.com/users/MuruganR96/orgs",
"repos_url": "https://api.github.com/users/MuruganR96/repos",
"events_url": "https://api.github.com/users/MuruganR96/events{/privacy}",
"received_events_url": "https://api.github.com/users/MuruganR96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sir how to resolve this? i am beginner for pytorch. \r\nThanks.",
"I will have a look, I am not familiar with `run_lm_finetuning` yet.\r\nIn the meantime maybe @tholor has an advice?",
"Haven't seen this error before, but how does your training corpus \"vocab007.txt\" look like? Is training working successfully for the file \"samples/sample_text.txt\"?",
"Sir my vocab007.txt is a my own text sentences same as samples/sample_text.txt. but I won't tested before this sample _text.txt. directly i am put training to my vocab007.txt\r\nThank you so much @thomwolf @tholor sir. ",
"Not sure if I understood your last message. Is this solved? ",
"\r\n@tholor sir, stil now i am not solving this issue. \r\n\r\n> Is training working successfully for the file \"samples/sample_text.txt\"?\r\n\r\n\r\nNo, i am not train the file \"samples/sample_text.txt\"\r\n\r\n> how does your training corpus \"vocab007.txt\" look like? \r\n\r\n\r\nthis is line by line sentence like file \"samples/sample_text.txt\"\r\n\r\nsir i think this shape issue. batch vice split datas for multi gpu. that time this issue occurred. \r\n\r\nsir any suggestion? how to resolve is bug.\r\n\r\nthanks.",
"I cannot reproduce your error. Just tested again with a dummy corpus (referenced in the readme) on a 4x P100 machine using the same parameters as you:\r\n```\r\npython3 run_lm_finetuning.py --bert_model bert-base-uncased --do_train --train_file ../samples/small_wiki_sentence_corpus.txt --output_dir models --num_train_epochs 1.0 --learning_rate 3e-5 --train_batch_size 32 --max_seq_length 128\r\n```\r\n\r\nTraining just started normally. I would recommend to: \r\n1) Check your local setup and try to run with the above corpus (download here). If this doesn't work, there's something wrong with your setup (e.g. CUDA)\r\n2) If 1 works, examine your training corpus \"vocab007.txt\". I suppose there's something wrong here causing wrong `input_ids`. A good starting point will be the logs that you see in the beginning of model training and print some training examples (including `input_ids`). They should look something like this: \r\n\r\n```\r\n01/11/2019 08:20:24 - INFO - __main__ - ***** Running training *****\r\n01/11/2019 08:20:24 - INFO - __main__ - Num examples = 476462\r\n01/11/2019 08:20:24 - INFO - __main__ - Batch size = 32\r\n01/11/2019 08:20:24 - INFO - __main__ - Num steps = 14889\r\nEpoch: 0%| | 0/1 [00:00<?, ?it/s]\r\n01/11/2019 08:20:24 - INFO - __main__ - *** Example *** | 0/14890 [00:00<?, ?it/s]\r\n01/11/2019 08:20:24 - INFO - __main__ - guid: 0\r\n01/11/2019 08:20:24 - INFO - __main__ - tokens: [CLS] [MASK] jp ##g mini ##at ##ur [MASK] [UNK] [MASK] eine ##m [MASK] [UNK] mit [UNK] . [SEP] [UNK] [UNK] [MASK] jp ##g mini ##at ##ur [UNK] [MASK] [MASK] ##t eine [UNK]\r\n in [UNK] [UNK] - [UNK] [UNK] [UNK] [UNK] in [UNK] [SEP]\r\n01/11/2019 08:20:24 - INFO - __main__ - input_ids: 101 103 16545 2290 7163 4017 3126 103 100 103 27665 2213 103 100 10210 100 1012 102 100 100 103 16545 2290 7163 4017 3126 100 103 103 2102 27665 100 1999 100 100 1011 \r\n100 100 100 100 1999 100 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\r\n01/11/2019 08:20:24 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \r\n0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\r\n01/11/2019 08:20:24 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\r\n 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\r\n01/11/2019 08:20:24 - INFO - __main__ - LM label: [-1, 1012, -1, -1, -1, -1, -1, 100, -1, 1999, -1, -1, 100, -1, -1, -1, -1, -1, -1, -1, 1012, -1, -1, -1, -1, -1, -1, 3413, 3771, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,\r\n -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,\r\n -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]\r\n01/11/2019 08:20:24 - INFO - __main__ - Is next sentence label: 0\r\n```",
"> Check your local setup and try to run with the above corpus \r\n\r\nok @tholor sir, now i will check.",
"Hi, it can be solved by using `python3.6`.\r\n\r\nSee #184 ",
"Thanks sir.",
"Fixed on master now (compatible with Python 3.5 again)"
] | 1,547 | 1,547 | 1,547 | NONE | null | Sir, I was pre-training our BERT-Base model with multi-GPU training on 8 GPUs. Preprocessing succeeded, but the next step, training, raised an error in run_lm_finetuning.py.
--
`python3 run_lm_finetuning.py --bert_model bert-base-uncased --do_train --train_file vocab007.txt --output_dir models --num_train_epochs 5.0 --learning_rate 3e-5 --train_batch_size 32 --max_seq_length 128 `
```
Traceback (most recent call last):
File "run_lm_finetuning.py", line 646, in <module>
main()
File "run_lm_finetuning.py", line 594, in main
loss = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 143, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 153, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
raise output
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
output = module(*input, **kwargs)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/pytorch_pretrained_bert/modeling.py", line 695, in forward
output_all_encoded_layers=False)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/pytorch_pretrained_bert/modeling.py", line 626, in forward
embedding_output = self.embeddings(input_ids, token_type_ids)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/pytorch_pretrained_bert/modeling.py", line 187, in forward
seq_length = input_ids.size(1)
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
```
Thanks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/175/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/174 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/174/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/174/comments | https://api.github.com/repos/huggingface/transformers/issues/174/events | https://github.com/huggingface/transformers/pull/174 | 397,138,625 | MDExOlB1bGxSZXF1ZXN0MjQzMTU2ODg5 | 174 | Added Squad 2.0 | {
"login": "abeljim",
"id": 34782317,
"node_id": "MDQ6VXNlcjM0NzgyMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/34782317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abeljim",
"html_url": "https://github.com/abeljim",
"followers_url": "https://api.github.com/users/abeljim/followers",
"following_url": "https://api.github.com/users/abeljim/following{/other_user}",
"gists_url": "https://api.github.com/users/abeljim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abeljim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abeljim/subscriptions",
"organizations_url": "https://api.github.com/users/abeljim/orgs",
"repos_url": "https://api.github.com/users/abeljim/repos",
"events_url": "https://api.github.com/users/abeljim/events{/privacy}",
"received_events_url": "https://api.github.com/users/abeljim/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thanks @abeljim.\r\nDo you have the associated results when you run this command?",
"python run_squad2.py \\\r\n --bert_model bert-large-uncased \\\r\n --do_train \\\r\n --do_predict \\\r\n --do_lower_case \\\r\n --train_file $SQUAD_DIR/train-v2.0.json \\\r\n --predict_file $SQUAD_DIR/dev-v2.0.json \\\r\n --train_batch_size 24 \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2.0 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir ./debug_squad/ \\\r\n --fp16 \\\r\n --loss_scale 128 \\\r\n --null_score_diff_threshold -2.6929588317871094\r\n\r\n{\r\n \"exact\": 76.62764255032427,\r\n \"f1\": 79.22523967450329,\r\n \"total\": 11873,\r\n \"HasAns_exact\": 68.31983805668017,\r\n \"HasAns_f1\": 73.52248155455082,\r\n \"HasAns_total\": 5928,\r\n \"NoAns_exact\": 84.9116904962153,\r\n \"NoAns_f1\": 84.9116904962153,\r\n \"NoAns_total\": 5945\r\n}\r\nThis is the command I used and the results",
"how much time does it take to train?\r\n"
] | 1,546 | 1,550 | 1,547 | CONTRIBUTOR | null | Accidentally closed the last pull request. Created a separate file for SQuAD 2.0; run with:
```
python3 run_squad.py \
  --bert_model bert-large-uncased_up \
  --do_predict \
  --do_lower_case \
  --train_file squad/train-v2.0.json \
  --predict_file squad/dev-v2.0.json \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir squad2_diff \
  --train_batch_size 24 \
  --fp16 \
  --loss_scale 128 \
  --null_score_diff_threshold -2.6929588317871094
```
If the null score threshold is not defined, the default value is 0.0. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/174/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/174",
"html_url": "https://github.com/huggingface/transformers/pull/174",
"diff_url": "https://github.com/huggingface/transformers/pull/174.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/174.patch",
"merged_at": 1547160045000
} |
https://api.github.com/repos/huggingface/transformers/issues/173 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/173/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/173/comments | https://api.github.com/repos/huggingface/transformers/issues/173/events | https://github.com/huggingface/transformers/issues/173 | 396,776,254 | MDU6SXNzdWUzOTY3NzYyNTQ= | 173 | What 's the mlm accuracy of pretrained model? | {
"login": "l126t",
"id": 21979549,
"node_id": "MDQ6VXNlcjIxOTc5NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/21979549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/l126t",
"html_url": "https://github.com/l126t",
"followers_url": "https://api.github.com/users/l126t/followers",
"following_url": "https://api.github.com/users/l126t/following{/other_user}",
"gists_url": "https://api.github.com/users/l126t/gists{/gist_id}",
"starred_url": "https://api.github.com/users/l126t/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/l126t/subscriptions",
"organizations_url": "https://api.github.com/users/l126t/orgs",
"repos_url": "https://api.github.com/users/l126t/repos",
"events_url": "https://api.github.com/users/l126t/events{/privacy}",
"received_events_url": "https://api.github.com/users/l126t/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, we didn't evaluate this metric. If you do feel free to share the results.\r\nRegarding the comparison between the Google and PyTorch implementations, please refere to the included Notebooks and the associated section of the readme."
] | 1,546 | 1,546 | 1,546 | NONE | null | What's the MLM accuracy of the pretrained model? In my case, I find the scores of the top-10 candidates are very close, but most are not suitable. Is this the same prediction as Google's original project?
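For anyone who wants to measure this, a minimal sketch of inspecting the top-10 masked-LM candidates and their probabilities (the sentence and mask position are arbitrary examples, not from the original question):
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

tokens = ["[CLS]", "the", "man", "went", "to", "the", "[MASK]", ".", "[SEP]"]
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
mask_pos = tokens.index("[MASK]")

with torch.no_grad():
    logits = model(input_ids)                  # (1, seq_len, vocab_size)
probs = torch.softmax(logits[0, mask_pos], dim=-1)
top_probs, top_ids = torch.topk(probs, 10)
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
print(top_probs.tolist())
```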
_Originally posted by @l126t in https://github.com/huggingface/pytorch-pretrained-BERT/issues/155#issuecomment-452195676_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/173/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/172 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/172/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/172/comments | https://api.github.com/repos/huggingface/transformers/issues/172/events | https://github.com/huggingface/transformers/pull/172 | 396,731,874 | MDExOlB1bGxSZXF1ZXN0MjQyODQzOTI4 | 172 | Never split some texts. | {
"login": "WrRan",
"id": 7569098,
"node_id": "MDQ6VXNlcjc1NjkwOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WrRan",
"html_url": "https://github.com/WrRan",
"followers_url": "https://api.github.com/users/WrRan/followers",
"following_url": "https://api.github.com/users/WrRan/following{/other_user}",
"gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WrRan/subscriptions",
"organizations_url": "https://api.github.com/users/WrRan/orgs",
"repos_url": "https://api.github.com/users/WrRan/repos",
"events_url": "https://api.github.com/users/WrRan/events{/privacy}",
"received_events_url": "https://api.github.com/users/WrRan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please have a look. :)",
"Looks good indeed, thanks @WrRan!"
] | 1,546 | 1,548 | 1,547 | CONTRIBUTOR | null | I have noticed BERT tokenizes texts in two steps:
1. punctuation: split the text into tokens
2. wordpiece: split each token into word pieces
Some texts such as `"[UNK]"` are supposed to be left as they are. However, they become `["[", "UNK", "]"]` or something like this.
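A rough sketch of the intended behaviour (the `never_split` argument reflects this PR; treat the exact signature as an assumption):
```python
from pytorch_pretrained_bert.tokenization import BasicTokenizer

# Tokens listed in never_split are kept intact by the punctuation-splitting step
tokenizer = BasicTokenizer(do_lower_case=True,
                           never_split=("[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]"))
print(tokenizer.tokenize("[UNK] is kept as one token"))
# expected: ['[UNK]', 'is', 'kept', 'as', 'one', 'token']
```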
This PR is to solve the above problem. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/172/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/172/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/172",
"html_url": "https://github.com/huggingface/transformers/pull/172",
"diff_url": "https://github.com/huggingface/transformers/pull/172.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/172.patch",
"merged_at": 1547037850000
} |
https://api.github.com/repos/huggingface/transformers/issues/171 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/171/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/171/comments | https://api.github.com/repos/huggingface/transformers/issues/171/events | https://github.com/huggingface/transformers/pull/171 | 396,382,404 | MDExOlB1bGxSZXF1ZXN0MjQyNTc4NjA0 | 171 | LayerNorm initialization | {
"login": "donglixp",
"id": 1070872,
"node_id": "MDQ6VXNlcjEwNzA4NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1070872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donglixp",
"html_url": "https://github.com/donglixp",
"followers_url": "https://api.github.com/users/donglixp/followers",
"following_url": "https://api.github.com/users/donglixp/following{/other_user}",
"gists_url": "https://api.github.com/users/donglixp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donglixp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donglixp/subscriptions",
"organizations_url": "https://api.github.com/users/donglixp/orgs",
"repos_url": "https://api.github.com/users/donglixp/repos",
"events_url": "https://api.github.com/users/donglixp/events{/privacy}",
"received_events_url": "https://api.github.com/users/donglixp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, the weights are overwritten by the loading model. But if the code is used to pretrain a model from scratch, it might affect performance. (related issue: https://github.com/huggingface/pytorch-pretrained-BERT/issues/143 )",
"Great, thanks @donglixp !"
] | 1,546 | 1,546 | 1,546 | CONTRIBUTOR | null | The LayerNorm gamma and beta should be initialized with .fill_(1.0) and .zero_(), respectively.
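A minimal sketch of that initialization (shown with `torch.nn.LayerNorm` for illustration; the BERT-specific LayerNorm in `modeling.py` exposes the same parameters under the gamma/beta naming described above):
```python
import torch.nn as nn

def init_layer_norm(module: nn.Module) -> None:
    # gamma (scale) -> 1.0, beta (shift) -> 0.0, matching the TF reference links below
    if isinstance(module, nn.LayerNorm):
        module.weight.data.fill_(1.0)
        module.bias.data.zero_()

model = nn.Sequential(nn.Linear(8, 8), nn.LayerNorm(8))
model.apply(init_layer_norm)
```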
reference links:
https://github.com/tensorflow/tensorflow/blob/989e78c412a7e0f5361d4d7dfdfb230c8136e749/tensorflow/contrib/layers/python/layers/layers.py#L2298
https://github.com/tensorflow/tensorflow/blob/989e78c412a7e0f5361d4d7dfdfb230c8136e749/tensorflow/contrib/layers/python/layers/layers.py#L2308 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/171/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/171",
"html_url": "https://github.com/huggingface/transformers/pull/171",
"diff_url": "https://github.com/huggingface/transformers/pull/171.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/171.patch",
"merged_at": 1546861487000
} |
https://api.github.com/repos/huggingface/transformers/issues/170 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/170/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/170/comments | https://api.github.com/repos/huggingface/transformers/issues/170/events | https://github.com/huggingface/transformers/issues/170 | 396,375,768 | MDU6SXNzdWUzOTYzNzU3Njg= | 170 | How to pretrain my own data with this pytorch code? | {
"login": "Gpwner",
"id": 19349207,
"node_id": "MDQ6VXNlcjE5MzQ5MjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gpwner",
"html_url": "https://github.com/Gpwner",
"followers_url": "https://api.github.com/users/Gpwner/followers",
"following_url": "https://api.github.com/users/Gpwner/following{/other_user}",
"gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions",
"organizations_url": "https://api.github.com/users/Gpwner/orgs",
"repos_url": "https://api.github.com/users/Gpwner/repos",
"events_url": "https://api.github.com/users/Gpwner/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gpwner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sir @Gpwner i think you have to refer to do pretrain for google/bert repo( https://github.com/google-research/bert#pre-training-with-bert ) and then convert tensorflow model as pytorch.",
"A pre-training script is now included in `master` thanks to @tholor's PR #124 ",
"> A pre-training script is now included in `master` thanks to @tholor's PR #124\r\nThanks οΌοΌ\r\nDoes it support Multiple GPUοΌBecause the official script does not support Multiple GPU",
"It does (you can read more about it [here in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#lm-fine-tuning))",
"> It does\r\n\r\ngreat jobοΌthanks ο½",
"All thanks should go to @tholor :-)"
] | 1,546 | 1,546 | 1,546 | NONE | null | I wonder how to pretrain with my own data. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/170/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/169 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/169/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/169/comments | https://api.github.com/repos/huggingface/transformers/issues/169/events | https://github.com/huggingface/transformers/pull/169 | 396,300,026 | MDExOlB1bGxSZXF1ZXN0MjQyNTIwODEw | 169 | Update modeling.py to fix typo | {
"login": "ichn-hu",
"id": 29735669,
"node_id": "MDQ6VXNlcjI5NzM1NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/29735669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ichn-hu",
"html_url": "https://github.com/ichn-hu",
"followers_url": "https://api.github.com/users/ichn-hu/followers",
"following_url": "https://api.github.com/users/ichn-hu/following{/other_user}",
"gists_url": "https://api.github.com/users/ichn-hu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ichn-hu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ichn-hu/subscriptions",
"organizations_url": "https://api.github.com/users/ichn-hu/orgs",
"repos_url": "https://api.github.com/users/ichn-hu/repos",
"events_url": "https://api.github.com/users/ichn-hu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ichn-hu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @ichn-hu this issue was resolved in a previous PR."
] | 1,546 | 1,546 | 1,546 | NONE | null | Fix typo in the documentation for the description of not using masked_lm_labels | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/169/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/169",
"html_url": "https://github.com/huggingface/transformers/pull/169",
"diff_url": "https://github.com/huggingface/transformers/pull/169.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/169.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/168 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/168/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/168/comments | https://api.github.com/repos/huggingface/transformers/issues/168/events | https://github.com/huggingface/transformers/issues/168 | 396,232,776 | MDU6SXNzdWUzOTYyMzI3NzY= | 168 | Cannot reproduce the result of run_squad 1.1 | {
"login": "hmt2014",
"id": 9130751,
"node_id": "MDQ6VXNlcjkxMzA3NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9130751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hmt2014",
"html_url": "https://github.com/hmt2014",
"followers_url": "https://api.github.com/users/hmt2014/followers",
"following_url": "https://api.github.com/users/hmt2014/following{/other_user}",
"gists_url": "https://api.github.com/users/hmt2014/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hmt2014/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hmt2014/subscriptions",
"organizations_url": "https://api.github.com/users/hmt2014/orgs",
"repos_url": "https://api.github.com/users/hmt2014/repos",
"events_url": "https://api.github.com/users/hmt2014/events{/privacy}",
"received_events_url": "https://api.github.com/users/hmt2014/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I can reproduce the results, learning rate is 3e-5 , epoch is 2.0",
"by using fp16, the f1 is 90.8 ",
"> by using fp16, the f1 is 90.8\r\n\r\nSo the key is set fp16 is True?",
"@hmt2014 could you give the exact command line that you use to train your model?",
"Yes, please use the command line example indicated [here](https://github.com/huggingface/pytorch-pretrained-BERT#squad) in the readme for SQuAD."
] | 1,546 | 1,546 | 1,546 | NONE | null | I trained for 5 epochs with a learning rate of 5e-5, but my evaluation result is {'exact_match': 32.04351939451277, 'f1': 36.53574674513405}.
What is the problem? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/168/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/167 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/167/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/167/comments | https://api.github.com/repos/huggingface/transformers/issues/167/events | https://github.com/huggingface/transformers/issues/167 | 396,141,181 | MDU6SXNzdWUzOTYxNDExODE= | 167 | Question about hidden layers from pretained model | {
"login": "mvss80",
"id": 5709876,
"node_id": "MDQ6VXNlcjU3MDk4NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5709876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mvss80",
"html_url": "https://github.com/mvss80",
"followers_url": "https://api.github.com/users/mvss80/followers",
"following_url": "https://api.github.com/users/mvss80/following{/other_user}",
"gists_url": "https://api.github.com/users/mvss80/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mvss80/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mvss80/subscriptions",
"organizations_url": "https://api.github.com/users/mvss80/orgs",
"repos_url": "https://api.github.com/users/mvss80/repos",
"events_url": "https://api.github.com/users/mvss80/events{/privacy}",
"received_events_url": "https://api.github.com/users/mvss80/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes you are right. The first value returned is the output for `BertEncoder.forward`.\r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/pytorch_pretrained_bert/modeling.py#L623-L634 "
] | 1,546 | 1,546 | 1,546 | NONE | null | In the example shown to get hidden states https://github.com/huggingface/pytorch-pretrained-BERT#usage
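A minimal self-contained sketch of that pattern (the sentence is arbitrary; the call mirrors the readme's Usage example):
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("who was jim henson ?"))
tokens_tensor = torch.tensor([ids])

with torch.no_grad():
    encoded_layers, pooled_output = model(tokens_tensor)
# encoded_layers is a list with one tensor per Transformer layer;
# the final hidden states are the last element:
final_hidden_states = encoded_layers[-1]   # shape: (batch, seq_len, hidden_size)
```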
I want to confirm - the final hidden layer corresponds to the last element of `encoded_layers`, right? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/167/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/166 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/166/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/166/comments | https://api.github.com/repos/huggingface/transformers/issues/166/events | https://github.com/huggingface/transformers/pull/166 | 396,125,596 | MDExOlB1bGxSZXF1ZXN0MjQyNDE1MzY5 | 166 | Fix error when `bert_model` param is path or url. | {
"login": "likejazz",
"id": 1250095,
"node_id": "MDQ6VXNlcjEyNTAwOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1250095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/likejazz",
"html_url": "https://github.com/likejazz",
"followers_url": "https://api.github.com/users/likejazz/followers",
"following_url": "https://api.github.com/users/likejazz/following{/other_user}",
"gists_url": "https://api.github.com/users/likejazz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/likejazz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/likejazz/subscriptions",
"organizations_url": "https://api.github.com/users/likejazz/orgs",
"repos_url": "https://api.github.com/users/likejazz/repos",
"events_url": "https://api.github.com/users/likejazz/events{/privacy}",
"received_events_url": "https://api.github.com/users/likejazz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok, looks good to me, thanks @likejazz "
] | 1,546 | 1,546 | 1,546 | CONTRIBUTOR | null | An error occurs when the `bert_model` param is a path or URL. Therefore, if it is a path, specify the last path segment to prevent the error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/166/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/166",
"html_url": "https://github.com/huggingface/transformers/pull/166",
"diff_url": "https://github.com/huggingface/transformers/pull/166.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/166.patch",
"merged_at": 1546861256000
} |
https://api.github.com/repos/huggingface/transformers/issues/165 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/165/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/165/comments | https://api.github.com/repos/huggingface/transformers/issues/165/events | https://github.com/huggingface/transformers/pull/165 | 395,972,623 | MDExOlB1bGxSZXF1ZXN0MjQyMjk5MTM2 | 165 | fixed model names in help string | {
"login": "oliverguhr",
"id": 3495355,
"node_id": "MDQ6VXNlcjM0OTUzNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3495355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverguhr",
"html_url": "https://github.com/oliverguhr",
"followers_url": "https://api.github.com/users/oliverguhr/followers",
"following_url": "https://api.github.com/users/oliverguhr/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverguhr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverguhr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverguhr/subscriptions",
"organizations_url": "https://api.github.com/users/oliverguhr/orgs",
"repos_url": "https://api.github.com/users/oliverguhr/repos",
"events_url": "https://api.github.com/users/oliverguhr/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverguhr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Oliver,\r\n\r\nI've already created a pull request which fixes this problem for all the example files (#156).\r\n\r\nCheers!",
"Ok, great."
] | 1,546 | 1,546 | 1,546 | CONTRIBUTOR | null | Set correct model names according to modeling.py | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/165/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/165",
"html_url": "https://github.com/huggingface/transformers/pull/165",
"diff_url": "https://github.com/huggingface/transformers/pull/165.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/165.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/164 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/164/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/164/comments | https://api.github.com/repos/huggingface/transformers/issues/164/events | https://github.com/huggingface/transformers/issues/164 | 395,941,645 | MDU6SXNzdWUzOTU5NDE2NDU= | 164 | pretrained model | {
"login": "minmummax",
"id": 25759762,
"node_id": "MDQ6VXNlcjI1NzU5NzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/25759762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minmummax",
"html_url": "https://github.com/minmummax",
"followers_url": "https://api.github.com/users/minmummax/followers",
"following_url": "https://api.github.com/users/minmummax/following{/other_user}",
"gists_url": "https://api.github.com/users/minmummax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minmummax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minmummax/subscriptions",
"organizations_url": "https://api.github.com/users/minmummax/orgs",
"repos_url": "https://api.github.com/users/minmummax/repos",
"events_url": "https://api.github.com/users/minmummax/events{/privacy}",
"received_events_url": "https://api.github.com/users/minmummax/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"All the code related to word embeddings is located there https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/pytorch_pretrained_bert/modeling.py#L172-L200\r\n\r\nIf you want to access pretrained embeddings, the easier thing to do would be to load a pretrained model and extract its embedding matrices.",
"> All the code related to word embeddings is located there\r\n> \r\n> [pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/pytorch_pretrained_bert/modeling.py#L172-L200)\r\n> \r\n> Lines 172 to 200 in [8da280e](/huggingface/pytorch-pretrained-BERT/commit/8da280ebbeca5ebd7561fd05af78c65df9161f92)\r\n> \r\n> class BertEmbeddings(nn.Module): \r\n> \"\"\"Construct the embeddings from word, position and token_type embeddings. \r\n> \"\"\" \r\n> def __init__(self, config): \r\n> super(BertEmbeddings, self).__init__() \r\n> self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size) \r\n> self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) \r\n> self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) \r\n> \r\n> # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load \r\n> # any TensorFlow checkpoint file \r\n> self.LayerNorm = BertLayerNorm(config.hidden_size, eps=1e-12) \r\n> self.dropout = nn.Dropout(config.hidden_dropout_prob) \r\n> \r\n> def forward(self, input_ids, token_type_ids=None): \r\n> seq_length = input_ids.size(1) \r\n> position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device) \r\n> position_ids = position_ids.unsqueeze(0).expand_as(input_ids) \r\n> if token_type_ids is None: \r\n> token_type_ids = torch.zeros_like(input_ids) \r\n> \r\n> words_embeddings = self.word_embeddings(input_ids) \r\n> position_embeddings = self.position_embeddings(position_ids) \r\n> token_type_embeddings = self.token_type_embeddings(token_type_ids) \r\n> \r\n> embeddings = words_embeddings + position_embeddings + token_type_embeddings \r\n> embeddings = self.LayerNorm(embeddings) \r\n> embeddings = self.dropout(embeddings) \r\n> return embeddings \r\n> If you want to access pretrained embeddings, the easier thing to do would be to load a pretrained model and extract its embedding matrices.\r\n\r\noh I have seen this code these days . and from this code I think it dose not use the pretrained embedding paras , and what do you mean by load and extract a pretrained model ???? Is it from the original supplies",
"```python\r\nIn [1]: from pytorch_pretrained_bert import BertModel \r\n\r\nIn [2]: model = BertModel.from_pretrained('bert-base-uncased') \r\n\r\nIn [3]: model.embeddings.word_embeddings \r\nOut[3]: Embedding(30522, 768)\r\n```\r\n\r\nThis field of the `BertEmbeddings` class contains the pretrained embeddings. It gets set by calling `BertModel.from_pretrained`.",
"Thanks Gregory that the way to go indeed!"
] | 1,546 | 1,546 | 1,546 | NONE | null | Does the downloaded pretrained model include word embeddings?
I do not see any embedding in your code.
Please advise. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/164/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/163 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/163/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/163/comments | https://api.github.com/repos/huggingface/transformers/issues/163/events | https://github.com/huggingface/transformers/issues/163 | 395,893,030 | MDU6SXNzdWUzOTU4OTMwMzA= | 163 | TypeError: Class advice impossible in Python3 | {
"login": "lynnna-xu",
"id": 45704491,
"node_id": "MDQ6VXNlcjQ1NzA0NDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/45704491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lynnna-xu",
"html_url": "https://github.com/lynnna-xu",
"followers_url": "https://api.github.com/users/lynnna-xu/followers",
"following_url": "https://api.github.com/users/lynnna-xu/following{/other_user}",
"gists_url": "https://api.github.com/users/lynnna-xu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lynnna-xu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lynnna-xu/subscriptions",
"organizations_url": "https://api.github.com/users/lynnna-xu/orgs",
"repos_url": "https://api.github.com/users/lynnna-xu/repos",
"events_url": "https://api.github.com/users/lynnna-xu/events{/privacy}",
"received_events_url": "https://api.github.com/users/lynnna-xu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I came across this error after running `import pytorch_pretrained_bert`. My configurations are as follows:\r\ntorch version 1.0.0\r\npython version 3.6\r\ncuda 9.2",
"I uninstalled the old version of apex and reinstalled a new version. It worked. Thanks.\r\n\r\ngit clone https://www.github.com/nvidia/apex\r\ncd apex\r\npython setup.py install",
"I still have the problem in Google Colab\r\n",
"> I still have the problem in Google Colab\r\n\r\nHello! I also had the problem and now I could solve it. Please install apex exactly as described above:\r\n\r\ngit clone https://www.github.com/nvidia/apex\r\ncd apex\r\npython setup.py install\r\n\r\nDouble check the following: The git command creates a folder called apex. In this folder is another folder called apex. This folder is the folder of interest. Please rename the folder on the top level (e.g. apex-2) and move the lower apex folder to the main level. Then python will also find the folder and it should work.\r\n\r\n<img width=\"316\" alt=\"Bildschirmfoto 2020-05-26 um 17 31 44\" src=\"https://user-images.githubusercontent.com/25347417/82919812-cba88e00-9f76-11ea-9d35-8918b9d83f1c.png\">\r\n\r\n Make sure that you have the version (0.1). Double check it with: \"!pip list\". ",
"The following command did the job for me (based on @fbaeumer's answer):\r\n`pip install git+https://www.github.com/nvidia/apex`"
] | 1,546 | 1,590 | 1,546 | NONE | null | ---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-ee86003eab97> in <module>()
----> 1 from pytorch_pretrained_bert import BertTokenizer
/opt/conda/envs/py3/lib/python3.6/site-packages/pytorch_pretrained_bert/__init__.py in <module>()
1 __version__ = "0.4.0"
2 from .tokenization import BertTokenizer, BasicTokenizer, WordpieceTokenizer
----> 3 from .modeling import (BertConfig, BertModel, BertForPreTraining,
4 BertForMaskedLM, BertForNextSentencePrediction,
5 BertForSequenceClassification, BertForMultipleChoice,
/opt/conda/envs/py3/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py in <module>()
152
153 try:
--> 154 from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm
155 except ImportError:
156 print("Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.")
/opt/conda/envs/py3/lib/python3.6/site-packages/apex/__init__.py in <module>()
16 from apex.exceptions import (ApexAuthSecret,
17 ApexSessionSecret)
---> 18 from apex.interfaces import (ApexImplementation,
19 IApex)
20 from apex.lib.libapex import (groupfinder,
/opt/conda/envs/py3/lib/python3.6/site-packages/apex/interfaces.py in <module>()
8 pass
9
---> 10 class ApexImplementation(object):
11 """ Class so that we can tell if Apex is installed from other
12 applications
/opt/conda/envs/py3/lib/python3.6/site-packages/apex/interfaces.py in ApexImplementation()
12 applications
13 """
---> 14 implements(IApex)
/opt/conda/envs/py3/lib/python3.6/site-packages/zope/interface/declarations.py in implements(*interfaces)
481 # the coverage for this block there. :(
482 if PYTHON3:
--> 483 raise TypeError(_ADVICE_ERROR % 'implementer')
484 _implements("implements", interfaces, classImplements)
485
TypeError: Class advice impossible in Python3. Use the @implementer class decorator instead. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/163/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/162 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/162/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/162/comments | https://api.github.com/repos/huggingface/transformers/issues/162/events | https://github.com/huggingface/transformers/pull/162 | 395,840,540 | MDExOlB1bGxSZXF1ZXN0MjQyMTk3MDM5 | 162 | Regardless of do_lower_case in BertTokenizer | {
"login": "davidkim205",
"id": 16680469,
"node_id": "MDQ6VXNlcjE2NjgwNDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/16680469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidkim205",
"html_url": "https://github.com/davidkim205",
"followers_url": "https://api.github.com/users/davidkim205/followers",
"following_url": "https://api.github.com/users/davidkim205/following{/other_user}",
"gists_url": "https://api.github.com/users/davidkim205/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidkim205/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidkim205/subscriptions",
"organizations_url": "https://api.github.com/users/davidkim205/orgs",
"repos_url": "https://api.github.com/users/davidkim205/repos",
"events_url": "https://api.github.com/users/davidkim205/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidkim205/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,546 | 1,546 | 1,546 | NONE | null | Fix the problem where do_lower_case is always True
ref #1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/162/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/162",
"html_url": "https://github.com/huggingface/transformers/pull/162",
"diff_url": "https://github.com/huggingface/transformers/pull/162.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/162.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/161/comments | https://api.github.com/repos/huggingface/transformers/issues/161/events | https://github.com/huggingface/transformers/issues/161 | 395,581,367 | MDU6SXNzdWUzOTU1ODEzNjc= | 161 | Predict Mode: Weights of BertForQuestionAnswering not initialized from pretrained model | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@thomwolf Can you please reply here. This issue is different from issue#160\r\n#160 is for training mode, this issue is for prediction mode. ( I hope prediction can run on CPU)",
"Those messages are correct, the pretrained weights that have been released by Google Brain are just the ones of the core network. They did not release task specific (such as SQuAD) weights. To get a model that solves this task, you would have to train one yourself or get it from someone else.\r\n\r\nTo answer your second question, yes, predictions can run on CPU.",
"@rodgzilla is right (even though I think prediction on CPU will be very slow, you should use a GPU)",
"Hi @rodgzilla and @thomwolf Thanks for reply. I figured out after reply that in CPU, GPU talks, I forgot to train the model with squad training data, that is why those warning messages were coming. \r\n \r\ntraining on 1 GPU took 2.5 hours per epoch (I had to reduce the max_seq_length due to large memory consumption). And after training prediction is running fine on CPU. \r\n \r\nWould you like to comment on below 2 observations: \r\n1. pytorch version is faster than tensorflow version. Good but why.\r\n2. Answers of tensorflow version and pytorch version are different, even though the training, question and data title is same.",
"\r\n> Those messages are correct, the pretrained weights that have been released by Google Brain are just the ones of the core network. They did not release task-specific (such as SQuAD) weights. To get a model that solves this task, you would have to train one yourself or get it from someone else.\r\n\r\nDoes it mean that training-phase will not train BERT transformer parameters? If BERT params are tuned during the training phase, it should be stored in the output model. During the prediction time, tuned params should be used instead of loading BERT params from the original file.\r\n\r\n"
] | 1,546 | 1,548 | 1,546 | NONE | null | I am running the following in prediction-only mode:
`(berttorch363) sandeepbhutani304@pytorch-bert-2:~/pytorch-pretrained-BERT/examples$ python run_squad.py --bert_model bert-large-uncased --do_predict --do_lower_case --predict_file $SQUAD_DIR/dev-v1.1_sand.json --train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 384 --doc_stride 128 --output_dir debug_squad9`
Every time I get the following **not initialized message** (**answers are also wrong**, both with bert-base and bert-large). Is something going wrong? (I am running prediction only on CPU.)
```
01/03/2019 13:43:09 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.weight', 'qa_outputs.bias']
01/03/2019 13:43:09 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
```
Complete log (if interested) is below:
```
(berttorch363) sandeepbhutani304@pytorch-bert-2:~/pytorch-pretrained-BERT/examples$ python run_squad.py --bert_model bert-base-uncased --do_predict --do_lower_case --predict_file $SQUAD_DIR/dev-v1.1_sand.json --train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 384 --doc_stride 128 --output_dir debug_squad09
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
/opt/anaconda3/envs/berttorch363/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py
01/03/2019 13:50:06 - INFO - __main__ - device: cpu n_gpu: 0, distributed training: False, 16-bits training: False
01/03/2019 13:50:07 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /home/sandeepbhutani304/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
01/03/2019 13:50:08 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /home/sandeepbhutani304/.pytorch_pretrained_bert/distributed_-1/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
01/03/2019 13:50:08 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /home/sandeepbhutani304/.pytorch_pretrained_bert/distributed_-1/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpm3rr0ye3
01/03/2019 13:50:14 - INFO - pytorch_pretrained_bert.modeling - Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 30522
}
01/03/2019 13:50:18 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
01/03/2019 13:50:20 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /home/sandeepbhutani304/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
01/03/2019 13:50:20 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /home/sandeepbhutani304/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpt86p1sz4
01/03/2019 13:50:32 - INFO - pytorch_pretrained_bert.modeling - Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 30522
}
01/03/2019 13:50:35 - INFO - __main__ - *** Example ***
01/03/2019 13:50:35 - INFO - __main__ - unique_id: 1000000000
01/03/2019 13:50:35 - INFO - __main__ - example_index: 0
01/03/2019 13:50:35 - INFO - __main__ - doc_span_index: 0
ce , a hare saw a tor ##to ##ise walking slowly with a heavy shell on his back . the hare was very proud of himself and he asked the tor ##to ##ise . ' shall we have a race ? ' the tor ##to ##ise agreed . they started the running race . the hare ran very fast . but the tor ##to ##ise walked very slowly . the proud hair rested under a tree and soon slept off . but the tor ##to ##ise walked very fast , slowly and steadily and reached the goal . at last , the tor ##to ##ise won the race . moral : pride goes before a fall . [SEP]
01/03/2019 13:50:35 - INFO - __main__ - token_to_orig_map:
--- omitted log ---
01/03/2019 13:50:35 - INFO - __main__ - ***** Running predictions *****
01/03/2019 13:50:35 - INFO - __main__ - Num orig examples = 1
01/03/2019 13:50:35 - INFO - __main__ - Num split examples = 1
01/03/2019 13:50:35 - INFO - __main__ - Batch size = 8
01/03/2019 13:50:35 - INFO - __main__ - Start evaluating
Evaluating: 0%| | 0/1 [00:00<?, ?it/s]01/03/2019 13:50:35 - INFO - __main__ - Processing example: 0
Evaluating: 100%|██████████| 1/1 [00:01<00:00,  1.26s/it]
01/03/2019 13:50:37 - INFO - __main__ - Writing predictions to: debug_squad09/predictions.json
01/03/2019 13:50:37 - INFO - __main__ - Writing nbest to: debug_squad09/nbest_predictions.json
(berttorch363) sandeepbhutani304@pytorch-bert-2:~/pytorch-pretrained-BERT/examples$
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/161/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/160/comments | https://api.github.com/repos/huggingface/transformers/issues/160/events | https://github.com/huggingface/transformers/issues/160 | 395,555,064 | MDU6SXNzdWUzOTU1NTUwNjQ= | 160 | Weights of BertForQuestionAnswering not initialized from pretrained model | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What kind of GPU are you using?",
"I am on CPU as of now\r\n\r\n```\r\n(berttorch363) sandeepbhutani304@pytorch-bert-2:~/pytorch-pretrained-BERT/examples$ lscpu\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nCPU(s): 2\r\nOn-line CPU(s) list: 0,1\r\nThread(s) per core: 2\r\nCore(s) per socket: 1\r\nSocket(s): 1\r\nNUMA node(s): 1\r\nVendor ID: GenuineIntel\r\nCPU family: 6\r\nModel: 85\r\nModel name: Intel(R) Xeon(R) CPU @ 2.00GHz\r\nStepping: 3\r\nCPU MHz: 2000.146\r\nBogoMIPS: 4000.29\r\nHypervisor vendor: KVM\r\nVirtualization type: full\r\nL1d cache: 32K\r\nL1i cache: 32K\r\nL2 cache: 256K\r\nL3 cache: 56320K\r\nNUMA node0 CPU(s): 0,1\r\n```",
"I don't think it is possible to use BERT on CPU (didn't work for me). The model is too big.\r\nIf you find a way, feel free to re-open the issue.",
"Thanks for confirmation. Is \"Fine Tuning\" training also not possible for \"BERT BASE\" on CPU? \r\nCorrect me if I am wrong, following command is doing fine tuning training only\r\n`python run_squad.py --bert_model bert-base-uncased --do_train --do_predict --do_lower_case --train_file $SQUAD_DIR/train-v1.1.json --predict_file $SQUAD_DIR/dev-v1.1_sand.json --train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 384 --doc_stride 128 --output_dir debug_squad9`",
"You are right.\r\nAnd no, unfortunately, fine-tuning is not possible on CPU in my opinion.",
"When I use NVIDIA Corporation GM200 [GeForce GTX TITAN X], I also have this problem\r\n\r\n",
"Sorry - why is the model \"too big\" to be trained on CPU? Shouldn't the memory requirements of the GPU and CPU be basically the same? As far as I can tell, BERT should run on 12GB GPUs, plenty of CPU machines have more RAM than that. Or is there a difference in how the model is materialized in memory between CPU and GPU training?",
"My first comment was badly worded. You can `run` the model on CPU but `training` it on CPU is unrealistic.",
"Can you elaborate? My machine has 30gb of ram but indeed I've found I am running out of memory. How come a 12gb gpu is enough, what makes the difference? Also I'm talking about fine-tuning a multiple choice model, just for context.",
"How big your data @phdowling ? I have TPU for training, if you want I will give access to you"
] | 1,546 | 1,591 | 1,546 | NONE | null | I am trying to run the code cloned from git but am not able to train. Please advise.
`python run_squad.py --bert_model bert-base-uncased --do_train --do_predict --do_lower_case --train_file $SQUAD_DIR/train-v1.1.json --predict_file $SQUAD_DIR/dev-v1.1_sand.json --train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 384 --doc_stride 128 --output_dir debug_squad9`
I am constantly getting this error when running in training mode:
```
01/03/2019 12:22:39 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.weight', 'qa_outputs.bias']
01/03/2019 12:22:39 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
hon3.6/site-packages/torch/nn/functional.py", line 749, in dropout
else _VF.dropout(input, p, training))
RuntimeError: $ Torch: not enough memory: you tried to allocate 0GB. Buy new RAM! at /pytorch/aten/src/TH/THGeneral.cpp:201
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/160/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/159/comments | https://api.github.com/repos/huggingface/transformers/issues/159/events | https://github.com/huggingface/transformers/pull/159 | 395,535,069 | MDExOlB1bGxSZXF1ZXN0MjQxOTY1NTU4 | 159 | Allow do_eval to be used without do_train and to use the pretrained model in the output folder | {
"login": "jaderabbit",
"id": 5547095,
"node_id": "MDQ6VXNlcjU1NDcwOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5547095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaderabbit",
"html_url": "https://github.com/jaderabbit",
"followers_url": "https://api.github.com/users/jaderabbit/followers",
"following_url": "https://api.github.com/users/jaderabbit/following{/other_user}",
"gists_url": "https://api.github.com/users/jaderabbit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaderabbit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaderabbit/subscriptions",
"organizations_url": "https://api.github.com/users/jaderabbit/orgs",
"repos_url": "https://api.github.com/users/jaderabbit/repos",
"events_url": "https://api.github.com/users/jaderabbit/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaderabbit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed!"
] | 1,546 | 1,546 | 1,546 | CONTRIBUTOR | null | If you wanted to use the pre-trained model to redo evaluation without training, it errors because the output directory already exists (the output directory that contains the pre-trained model that one might like to evaluate).
Additionally, a couple of fields are not initialised if one does not train and only evaluates.
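
A minimal sketch of the kind of guard this implies (an illustrative helper, not the actual diff; the function name is made up):

```python
import os

def prepare_output_dir(output_dir, do_train):
    # Only treat an existing, non-empty output_dir as an error when we are about to train;
    # for evaluation-only runs the directory is expected to already hold the fine-tuned model.
    if do_train and os.path.exists(output_dir) and os.listdir(output_dir):
        raise ValueError("Output directory ({}) already exists and is not empty.".format(output_dir))
    os.makedirs(output_dir, exist_ok=True)
```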
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/159/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/159/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/159",
"html_url": "https://github.com/huggingface/transformers/pull/159",
"diff_url": "https://github.com/huggingface/transformers/pull/159.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/159.patch",
"merged_at": 1546860667000
} |
https://api.github.com/repos/huggingface/transformers/issues/158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/158/comments | https://api.github.com/repos/huggingface/transformers/issues/158/events | https://github.com/huggingface/transformers/issues/158 | 395,517,012 | MDU6SXNzdWUzOTU1MTcwMTI= | 158 | AttributeError: 'BertForPreTraining' object has no attribute 'global_step' | {
"login": "MuruganR96",
"id": 35978784,
"node_id": "MDQ6VXNlcjM1OTc4Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/35978784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MuruganR96",
"html_url": "https://github.com/MuruganR96",
"followers_url": "https://api.github.com/users/MuruganR96/followers",
"following_url": "https://api.github.com/users/MuruganR96/following{/other_user}",
"gists_url": "https://api.github.com/users/MuruganR96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MuruganR96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MuruganR96/subscriptions",
"organizations_url": "https://api.github.com/users/MuruganR96/orgs",
"repos_url": "https://api.github.com/users/MuruganR96/repos",
"events_url": "https://api.github.com/users/MuruganR96/events{/privacy}",
"received_events_url": "https://api.github.com/users/MuruganR96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have the same issue",
"Did you find any solution?\r\n",
"I have the same issue too"
] | 1,546 | 1,596 | 1,546 | NONE | null | @thomwolf sir, I have the same issue (https://github.com/huggingface/pytorch-pretrained-BERT/issues/50#issuecomment-440624216); it is not resolved. How do I convert my fine-tuned pretrained model to PyTorch?
```
export BERT_BASE_DIR=/home/dell/backup/NWP/bert-base-uncased/bert_tensorflow_e100
pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \
$BERT_BASE_DIR/model.ckpt-100 \
$BERT_BASE_DIR/bert_config.json \
$BERT_BASE_DIR/pytorch_model.bin
```
```
Traceback (most recent call last):
File "/home/dell/Downloads/Downloads/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/dell/Downloads/Downloads/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/dell/backup/bert_env/lib/python3.6/site-packages/pytorch_pretrained_bert/__main__.py", line 19, in <module>
convert_tf_checkpoint_to_pytorch(TF_CHECKPOINT, TF_CONFIG, PYTORCH_DUMP_OUTPUT)
File "/home/dell/backup/bert_env/lib/python3.6/site-packages/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 69, in convert_tf_checkpoint_to_pytorch
pointer = getattr(pointer, l[0])
File "/home/dell/backup/bert_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 535, in __getattr__
type(self).__name__, name))
AttributeError: 'BertForPreTraining' object has no attribute 'global_step'
```
Sir, how do I resolve this issue?
Thanks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/158/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/157/comments | https://api.github.com/repos/huggingface/transformers/issues/157/events | https://github.com/huggingface/transformers/issues/157 | 395,255,132 | MDU6SXNzdWUzOTUyNTUxMzI= | 157 | Is it feasible to set num_workers>=1 in DataLoader to quickly load data? | {
"login": "lixinsu",
"id": 15691697,
"node_id": "MDQ6VXNlcjE1NjkxNjk3",
"avatar_url": "https://avatars.githubusercontent.com/u/15691697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lixinsu",
"html_url": "https://github.com/lixinsu",
"followers_url": "https://api.github.com/users/lixinsu/followers",
"following_url": "https://api.github.com/users/lixinsu/following{/other_user}",
"gists_url": "https://api.github.com/users/lixinsu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lixinsu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lixinsu/subscriptions",
"organizations_url": "https://api.github.com/users/lixinsu/orgs",
"repos_url": "https://api.github.com/users/lixinsu/repos",
"events_url": "https://api.github.com/users/lixinsu/events{/privacy}",
"received_events_url": "https://api.github.com/users/lixinsu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Are you asking if it possible or do you want this change included to the code?\r\n\r\nI don't see why this change would cause a problem, if we choose to implement it we should add a command line argument to specify this value.",
"Yes, feel free to submit a PR if you have a working implementation.",
"@thomwolf @rodgzilla Is it still the case with the new Trainer?"
] | 1,546 | 1,591 | 1,546 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/157/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/156/comments | https://api.github.com/repos/huggingface/transformers/issues/156/events | https://github.com/huggingface/transformers/pull/156 | 395,241,547 | MDExOlB1bGxSZXF1ZXN0MjQxNzQyMDE3 | 156 | Adding new pretrained model to the help of the `bert_model` argument. | {
"login": "rodgzilla",
"id": 12107203,
"node_id": "MDQ6VXNlcjEyMTA3MjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/12107203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rodgzilla",
"html_url": "https://github.com/rodgzilla",
"followers_url": "https://api.github.com/users/rodgzilla/followers",
"following_url": "https://api.github.com/users/rodgzilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rodgzilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rodgzilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rodgzilla/subscriptions",
"organizations_url": "https://api.github.com/users/rodgzilla/orgs",
"repos_url": "https://api.github.com/users/rodgzilla/repos",
"events_url": "https://api.github.com/users/rodgzilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rodgzilla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks Gregory!"
] | 1,546 | 1,546 | 1,546 | CONTRIBUTOR | null | The help for the `bert_model` command line argument has not been updated in the examples files. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/156/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/156",
"html_url": "https://github.com/huggingface/transformers/pull/156",
"diff_url": "https://github.com/huggingface/transformers/pull/156.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/156.patch",
"merged_at": 1546860437000
} |
https://api.github.com/repos/huggingface/transformers/issues/155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/155/comments | https://api.github.com/repos/huggingface/transformers/issues/155/events | https://github.com/huggingface/transformers/issues/155 | 394,870,891 | MDU6SXNzdWUzOTQ4NzA4OTE= | 155 | Why not the mlm use the information of adjacent sentences? | {
"login": "l126t",
"id": 21979549,
"node_id": "MDQ6VXNlcjIxOTc5NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/21979549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/l126t",
"html_url": "https://github.com/l126t",
"followers_url": "https://api.github.com/users/l126t/followers",
"following_url": "https://api.github.com/users/l126t/following{/other_user}",
"gists_url": "https://api.github.com/users/l126t/gists{/gist_id}",
"starred_url": "https://api.github.com/users/l126t/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/l126t/subscriptions",
"organizations_url": "https://api.github.com/users/l126t/orgs",
"repos_url": "https://api.github.com/users/l126t/repos",
"events_url": "https://api.github.com/users/l126t/events{/privacy}",
"received_events_url": "https://api.github.com/users/l126t/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The model is already using adjacent sentences to make its predictions, it just happens to be wrong in your case.\r\n\r\nIf you would like to make it choose from a specific list of words, you could use the code that I mentionned in #80. ",
"Thanks @rodgzilla !",
"What 's the mlm accuracy of pretrained model? I find the scores of candidate in top 10 are very closeοΌbut most are not suitable. Is this the same prediction as Google's original project?"
] | 1,546 | 1,546 | 1,546 | NONE | null |
I prepared two sentences for the MLM to predict the masked part: "Tom cant run fast. He [mask] his back a few years ago." The result of the model (uncased base) is 'got'. That is meaningless. Obviously, "hurt" is better.
I wonder how to make the MLM use the information of adjacent sentences. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/155/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/154/comments | https://api.github.com/repos/huggingface/transformers/issues/154/events | https://github.com/huggingface/transformers/issues/154 | 394,865,030 | MDU6SXNzdWUzOTQ4NjUwMzA= | 154 | run_squad reports "for training,each question should exactly have 1 answer" when I tried to finetune BERT on SQuAD 2.0 | {
"login": "zhaoguangxiang",
"id": 17742385,
"node_id": "MDQ6VXNlcjE3NzQyMzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/17742385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaoguangxiang",
"html_url": "https://github.com/zhaoguangxiang",
"followers_url": "https://api.github.com/users/zhaoguangxiang/followers",
"following_url": "https://api.github.com/users/zhaoguangxiang/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaoguangxiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaoguangxiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaoguangxiang/subscriptions",
"organizations_url": "https://api.github.com/users/zhaoguangxiang/orgs",
"repos_url": "https://api.github.com/users/zhaoguangxiang/repos",
"events_url": "https://api.github.com/users/zhaoguangxiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaoguangxiang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,546 | 1,546 | 1,546 | NONE | null | But some questions of train-v2.0.json are unanswerable. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/154/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/153/comments | https://api.github.com/repos/huggingface/transformers/issues/153/events | https://github.com/huggingface/transformers/issues/153 | 394,864,622 | MDU6SXNzdWUzOTQ4NjQ2MjI= | 153 | Did you support squad2.0 | {
"login": "zhaoguangxiang",
"id": 17742385,
"node_id": "MDQ6VXNlcjE3NzQyMzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/17742385?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaoguangxiang",
"html_url": "https://github.com/zhaoguangxiang",
"followers_url": "https://api.github.com/users/zhaoguangxiang/followers",
"following_url": "https://api.github.com/users/zhaoguangxiang/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaoguangxiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaoguangxiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaoguangxiang/subscriptions",
"organizations_url": "https://api.github.com/users/zhaoguangxiang/orgs",
"repos_url": "https://api.github.com/users/zhaoguangxiang/repos",
"events_url": "https://api.github.com/users/zhaoguangxiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaoguangxiang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is still not supported but should be soon. You can follow/try this PR by @abeljim here:\r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/pull/152",
"This is now on master"
] | 1,546 | 1,547 | 1,547 | NONE | null | What is the command to reproduce the results on SQuAD 2.0 reported in the BERT paper?
Thanks~ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/153/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/152/comments | https://api.github.com/repos/huggingface/transformers/issues/152/events | https://github.com/huggingface/transformers/pull/152 | 394,759,764 | MDExOlB1bGxSZXF1ZXN0MjQxNDIzMzk2 | 152 | Squad 2.0 | {
"login": "abeljim",
"id": 34782317,
"node_id": "MDQ6VXNlcjM0NzgyMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/34782317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abeljim",
"html_url": "https://github.com/abeljim",
"followers_url": "https://api.github.com/users/abeljim/followers",
"following_url": "https://api.github.com/users/abeljim/following{/other_user}",
"gists_url": "https://api.github.com/users/abeljim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abeljim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abeljim/subscriptions",
"organizations_url": "https://api.github.com/users/abeljim/orgs",
"repos_url": "https://api.github.com/users/abeljim/repos",
"events_url": "https://api.github.com/users/abeljim/events{/privacy}",
"received_events_url": "https://api.github.com/users/abeljim/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The new run_squad.py can train and predict , but can't predict only.\r\n![image](https://user-images.githubusercontent.com/17742385/50643749-e48f5880-0fa9-11e9-9258-9872c6ddaaba.png)\r\n\r\n\r\n",
"in the predict only mode, len(nbest) is always 1",
"my scripts for predicting is \r\n\r\n\r\npython3 run_squad.py \\\r\n --bert_model bert-large-uncased_up \\\r\n --do_predict \\\r\n --do_lower_case \\\r\n --train_file squad/train-v2.0.json \\\r\n --predict_file squad/dev-v2.0.json \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir squad2_diff \\\r\n --train_batch_size 24 \\\r\n --fp16 \\\r\n --loss_scale 128 \\\r\n --version_2_with_negative \\\r\n --null_score_diff_threshold -2.6929588317871094\r\n \r\n \r\n",
"I've found the reason. Previously trained models are needed for prediction, but your code will check whether output_dir already exists and error will be reported if it exists, which is unreasonable.",
"the output_dir during evaluation should be consistent with the training process",
"the f1 without thersh is 80.8 and the best_f1_thesh is 81.2",
"Thanks for verifying the model works. I think predict not working by itself is not because of my code. I can create another pull request and fix it there. I look into it tomorrow morning.",
"Hi @abeljim, thanks for this, it looks nice!\r\n\r\nWould you mind creating a separate example (e.g. called `run_squad_2.py`) instead of modifying `run_squad.py`? It will be less error prone and easier to document/maintain.",
"@thomwolf Yeah no problem I will work on that today ",
"> @zhaoguangxiang i trained as your script, like following:\r\n> python run_squad.py \r\n> --bert_model bert-large-uncased \r\n> --do_train \r\n> --do_predict \r\n> --do_lower_case \r\n> --train_file $SQUAD_DIR/train-v2.0.json \r\n> --predict_file $SQUAD_DIR/dev-v2.0.json \r\n> --learning_rate 3e-5 \r\n> --num_train_epochs 2.0 \r\n> --max_seq_length 384 \r\n> --doc_stride 128 \r\n> --output_dir /tmp/squad3 \r\n> --train_batch_size 1 \r\n> --loss_scale 128 \r\n> --version_2_with_negative \r\n> --null_score_diff_threshold -2.6929588317871094\r\n> but i got the result like this:\r\n> {\r\n> \"exact\": 10.595468710519667,\r\n> \"f1\": 13.067377840768328,\r\n> \"total\": 11873,\r\n> \"HasAns_exact\": 0.5398110661268556,\r\n> \"HasAns_f1\": 5.490718134858687,\r\n> \"HasAns_total\": 5928,\r\n> \"NoAns_exact\": 20.62237174095879,\r\n> \"NoAns_f1\": 20.62237174095879,\r\n> \"NoAns_total\": 5945\r\n> }\r\n> i don't know why i got the wrong results, really need your help. thx.\r\n\r\nthe train_batch_size in your script is too small.",
"\r\n\r\n\r\n> my scripts for predicting is\r\n> \r\n> python3 run_squad.py\r\n> --bert_model bert-large-uncased_up\r\n> --do_predict\r\n> --do_lower_case\r\n> --train_file squad/train-v2.0.json\r\n> --predict_file squad/dev-v2.0.json\r\n> --learning_rate 3e-5\r\n> --num_train_epochs 2\r\n> --max_seq_length 384\r\n> --doc_stride 128\r\n> --output_dir squad2_diff\r\n> --train_batch_size 24\r\n> --fp16\r\n> --loss_scale 128\r\n> --version_2_with_negative\r\n> --null_score_diff_threshold -2.6929588317871094\r\n\r\nHi, How to find the best `null_score_diff_threshold` ? "
] | 1,546 | 1,572 | 1,546 | CONTRIBUTOR | null | Added SQuAD 2.0 support. It has been tested on BERT large with a null threshold of zero, with the following result:
{
"exact": 75.26320222353239,
"f1": 78.41636742280099,
"total": 11873,
"HasAns_exact": 74.51079622132254,
"HasAns_f1": 80.82616909765808,
"HasAns_total": 5928,
"NoAns_exact": 76.01345668629101,
"NoAns_f1": 76.01345668629101,
"NoAns_total": 5945
}
I believe the score will match Google's 83 with a null threshold between -1 and -5.
Run with the [--version_2_with_negative] flag for SQuAD 2.0.
Use [--null_score_diff_threshold $NULL_Threshold] to change the threshold (default value 0.0); a sketch of how the threshold is applied follows.
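
For illustration only, a rough sketch of how such a null threshold is typically applied at prediction time (variable and function names here are made up; the real logic lives in the prediction-writing code of run_squad.py):

```python
def choose_answer(score_null, best_start_logit, best_end_logit, best_text,
                  null_score_diff_threshold=0.0):
    # Compare the "no answer" (null) score against the best non-null span;
    # if the gap exceeds the threshold, predict an empty string (unanswerable).
    score_diff = score_null - (best_start_logit + best_end_logit)
    if score_diff > null_score_diff_threshold:
        return ""
    return best_text
```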
Tested SQuAD 1.1 with BERT base and it seems not to break its results:
{"exact_match": 79.73509933774834, "f1": 87.67221720784892}
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/152/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/152",
"html_url": "https://github.com/huggingface/transformers/pull/152",
"diff_url": "https://github.com/huggingface/transformers/pull/152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/152.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/151/comments | https://api.github.com/repos/huggingface/transformers/issues/151/events | https://github.com/huggingface/transformers/issues/151 | 394,673,351 | MDU6SXNzdWUzOTQ2NzMzNTE= | 151 | Using large model with fp16 enable causes the server down | {
"login": "hguan6",
"id": 19914123,
"node_id": "MDQ6VXNlcjE5OTE0MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/19914123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hguan6",
"html_url": "https://github.com/hguan6",
"followers_url": "https://api.github.com/users/hguan6/followers",
"following_url": "https://api.github.com/users/hguan6/following{/other_user}",
"gists_url": "https://api.github.com/users/hguan6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hguan6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hguan6/subscriptions",
"organizations_url": "https://api.github.com/users/hguan6/orgs",
"repos_url": "https://api.github.com/users/hguan6/repos",
"events_url": "https://api.github.com/users/hguan6/events{/privacy}",
"received_events_url": "https://api.github.com/users/hguan6/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you give more informations such as the command that you are using to run the model and the batch size that you are using? Have you tried reducing it?",
"Hi, @hguan6 try to adjust the batch size and use gradient accumulation (see [this section](https://github.com/huggingface/pytorch-pretrained-BERT#training-large-models-introduction-tools-and-examples) in the readme and the `run_squad` and `run_classifier` examples) if needed."
] | 1,546 | 1,546 | 1,546 | NONE | null | I am using a server with Ubuntu 16.04 and 4 TITAN X GPUs. The server runs the base model with no problems. But it cannot run the large model with 32-bit floating point, so I enabled fp16, and the server went down.
(When I successfully ran the base model, it consumed 8 GB of GPU memory on each of the 4 GPUs.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/151/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/150/comments | https://api.github.com/repos/huggingface/transformers/issues/150/events | https://github.com/huggingface/transformers/issues/150 | 394,596,898 | MDU6SXNzdWUzOTQ1OTY4OTg= | 150 | BertLayerNorm not loaded in CPU mode | {
"login": "tholor",
"id": 1563902,
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholor",
"html_url": "https://github.com/tholor",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"repos_url": "https://api.github.com/users/tholor/repos",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @tholor, apex is a GPU specific extension.\r\nWhat kind of use-case do you have in which you have apex installed but no GPU (also fp16 doesn't work on CPU, it's not supported on PyTorch currently)?",
"The two cases I came across this: \r\n1) testing if some code works for both GPU and CPU (on a GPU machine with apex installed)\r\n2) training/debugging small sample models on my laptop. It has a small \"toy GPU\" with only 2 GB RAM and therefore I am usually using the CPUs here. \r\n\r\nI agree that these are edge cases, but I thought the flag `--no_cuda` is intended for exactly such cases?",
"I see. It's a bit tricky because apex is loaded by default when it can be found and this loading is deep inside the library it-self, not the examples ([here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L153)). I don't think it's worth it to add specific logic inside the loading of the library to handle such a case.\r\n\r\nI guess the easiest solution in your case is to have two python environments (with conda or virtualenv) and switch to the one in which apex is not installed when don't want to use GPU.\r\n\r\nFeel free to re-open the issue if this doesn't solve your problem.",
"Sure, then it's not worth the effort.",
"@thomwolf a solution would be to check `torch.cuda.is_available()` and then we can disable apex by using CUDA_VISIBLE_DEVICES=-1",
"Is this also related to the fact then the tests fail when apex is installed?\r\n\r\n```\r\n def forward(self, input, weight, bias):\r\n input_ = input.contiguous()\r\n weight_ = weight.contiguous()\r\n bias_ = bias.contiguous()\r\n output, mean, invvar = fused_layer_norm_cuda.forward_affine(\r\n> input_, self.normalized_shape, weight_, bias_, self.eps)\r\nE RuntimeError: input must be a CUDA tensor (layer_norm_affine at apex/normalization/csrc/layer_norm_cuda.cpp:120)\r\nE frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f754d802021 in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-packages/torch/lib/libc10.so)\r\nE frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f754d8018ea in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-packages/torch/lib/libc10.so)\r\nE frame #2: layer_norm_affine(at::Tensor, c10::ArrayRef<long>, at::Tensor, at::Tensor, double) + 0x6b9 (0x7f754a8aafe9 in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-pack\r\nages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)\r\nE frame #3: <unknown function> + 0x19b9d (0x7f754a8b8b9d in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpy\r\nthon-36m-x86_64-linux-gnu.so)\r\nE frame #4: <unknown function> + 0x19d1e (0x7f754a8b8d1e in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpy\r\nthon-36m-x86_64-linux-gnu.so)\r\nE frame #5: <unknown function> + 0x16971 (0x7f754a8b5971 in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpy\r\nthon-36m-x86_64-linux-gnu.so)\r\nE <omitting python frames>\r\nE frame #13: THPFunction_do_forward(THPFunction*, _object*) + 0x15c (0x7f7587d411ec in /lium/buster1/caglayan/anaconda/envs/bert/lib/python3.6/site-packages/torch/lib/libtorch_python.so)\r\n\r\n../../lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/normalization/fused_layer_norm.py:21: RuntimeError\r\n_______________________________________________________________________________ OpenAIGPTModelTest.test_default\r\n```",
"Hello @artemisart,\r\n\r\nWhat do you mean by \"disable apex by CUDA_VISIBLE_DEVICES=-1\" ? I tried to do that but the import still work at [this line](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L153)",
"@LamDang You can set the env CUDA_VISIBLE_DEVICES=-1 to disable cuda in pytorch (ex when you launch your script in bash `CUDA_VISIBLE_DEVICES=-1 python script.py`), and then wrap the import apex with a `if torch.cuda.is_available()` in the script.",
"Hi all, I came across this issue when my GPU memory was fully loaded and had to make some inference at the same time. For this kind of temporary need, the simplest solution for me is just to `touch apex.py` before the run and remove it afterwards.",
"Re-opening this to remember to wrap the apex import with a if `torch.cuda.is_available()` in the next release as advocated by @artemisart ",
"Hello, I pushed a pull request here to solve this issue upstream https://github.com/NVIDIA/apex/pull/256\r\n\r\nUpdate: it is merged into apex",
"> Re-opening this to remember to wrap the apex import with a if `torch.cuda.is_available()` in the next release as advocated by @artemisart\r\n\r\nYes please, I also struggle with Apex in CPU mode, i have wrapped Bertmode in my object and when I tried to load the pretrained GPU model with torch.load(model, map_location='cpu') , it shows 'no module named apex' but if I install apex, I get no cuda error(I'm on a CPU machine in inference phase )",
"Well it should be solved in apex now. What is the exact error message you have ?\r\nBy the way, not using apex is also fine, don't worry about it if you don't need t.",
"I got\r\n`model = torch.load(model_file, map_location='cpu')`\r\n` result = unpickler.load()\r\nModuleNotFoundError: No module named 'apex' `\r\n\r\nmodel_file is a pretrained object with GPU with a bertmodel field , but I want to unpickle it in CPU mode",
"Try to use pytorch recommended serialization practice (saving/loading the state dict):\r\nhttps://pytorch.org/docs/stable/notes/serialization.html",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,545 | 1,561 | 1,561 | CONTRIBUTOR | null | I am running into an exception when loading a model on CPU in one of the example scripts. I suppose this is related to loading the FusedLayerNorm from apex, even when `--no_cuda` has been set.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/pytorch_pretrained_bert/modeling.py#L154
Or is this working for anybody else?
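For reference, a minimal sketch of the workaround discussed in the comments above (falling back to a pure-Python LayerNorm and only importing apex's fused kernel when CUDA is available); this is a sketch, not the library's exact code.
```python
import torch
import torch.nn as nn

class BertLayerNorm(nn.Module):
    """Plain LayerNorm fallback (TF-style, with epsilon inside the square root)."""
    def __init__(self, hidden_size, eps=1e-12):
        super(BertLayerNorm, self).__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.bias = nn.Parameter(torch.zeros(hidden_size))
        self.variance_epsilon = eps

    def forward(self, x):
        u = x.mean(-1, keepdim=True)
        s = (x - u).pow(2).mean(-1, keepdim=True)
        x = (x - u) / torch.sqrt(s + self.variance_epsilon)
        return self.weight * x + self.bias

# Only shadow the fallback with apex's fused kernel when a GPU is actually usable.
if torch.cuda.is_available():
    try:
        from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm
    except ImportError:
        pass  # keep the pure-Python implementation defined above
```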
Example:
```
run_classifier.py --data_dir glue/CoLA --task_name CoLA --do_train --do_eval --bert_model bert-base-cased --max_seq_length 32 --train_batch_size 12 --learning_rate 2e-5 --num_train_epochs 2.0 --output_dir /tmp/mrpc_output/ --no_cuda
```
Exception:
```
[...]
File "/home/mp/miniconda3/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/normalization/fused_layer_norm.py", line 19, in forward
input_, self.normalized_shape, weight_, bias_, self.eps)
RuntimeError: input must be a CUDA tensor (layer_norm_affine at apex/normalization/csrc/layer_norm_cuda.cpp:120)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7fe35f6e4cc5 in /home/mp/miniconda3/envs/bert/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: layer_norm_affine(at::Tensor, c10::ArrayRef<long>, at::Tensor, at::Tensor, double) + 0x4bc (0x7fe3591456ac in /home/mp/miniconda3/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #2: <unknown function> + 0x18db4 (0x7fe359152db4 in /home/mp/miniconda3/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #3: <unknown function> + 0x16505 (0x7fe359150505 in /home/mp/miniconda3/envs/bert/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)
<omitting python frames>
frame #12: THPFunction_do_forward(THPFunction*, _object*) + 0x15c (0x7fe38fb7db7c in /home/mp/miniconda3/envs/bert/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/150/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/149/comments | https://api.github.com/repos/huggingface/transformers/issues/149/events | https://github.com/huggingface/transformers/issues/149 | 394,507,967 | MDU6SXNzdWUzOTQ1MDc5Njc= | 149 | Speedup using NVIDIA Apex | {
"login": "llidev",
"id": 29957883,
"node_id": "MDQ6VXNlcjI5OTU3ODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/29957883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/llidev",
"html_url": "https://github.com/llidev",
"followers_url": "https://api.github.com/users/llidev/followers",
"following_url": "https://api.github.com/users/llidev/following{/other_user}",
"gists_url": "https://api.github.com/users/llidev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/llidev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/llidev/subscriptions",
"organizations_url": "https://api.github.com/users/llidev/orgs",
"repos_url": "https://api.github.com/users/llidev/repos",
"events_url": "https://api.github.com/users/llidev/events{/privacy}",
"received_events_url": "https://api.github.com/users/llidev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe. You can try with pytorch docker image `dockerhub 1.0-cuda10.0-cudnn7` to debug, as we did in the discussion in PR #116.",
"Just verified that CUDA10.0 makes 4x speedup. It should be better to include this in the main document.",
"What GPU do you run ... and how do you increase such a speedup? Is this possible with gtx 1080?\r\n\r\nThanks"
] | 1,545 | 1,563 | 1,546 | CONTRIBUTOR | null | Hi,
According to PR https://github.com/huggingface/pytorch-pretrained-BERT/pull/116, we should be able to achieve a 3-4x speedup for both bert-base and bert-large. However, I can only achieve a 2x speedup with bert-base. My docker image uses CUDA 9.0, while the discussion in PR https://github.com/huggingface/pytorch-pretrained-BERT/pull/116 is based on CUDA 10.0... I am wondering if that makes the difference.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/149/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/149/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/148/comments | https://api.github.com/repos/huggingface/transformers/issues/148/events | https://github.com/huggingface/transformers/issues/148 | 394,310,682 | MDU6SXNzdWUzOTQzMTA2ODI= | 148 | Embeddings from BERT for original tokens | {
"login": "nihalnayak",
"id": 5679782,
"node_id": "MDQ6VXNlcjU2Nzk3ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5679782?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nihalnayak",
"html_url": "https://github.com/nihalnayak",
"followers_url": "https://api.github.com/users/nihalnayak/followers",
"following_url": "https://api.github.com/users/nihalnayak/following{/other_user}",
"gists_url": "https://api.github.com/users/nihalnayak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nihalnayak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nihalnayak/subscriptions",
"organizations_url": "https://api.github.com/users/nihalnayak/orgs",
"repos_url": "https://api.github.com/users/nihalnayak/repos",
"events_url": "https://api.github.com/users/nihalnayak/events{/privacy}",
"received_events_url": "https://api.github.com/users/nihalnayak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, you should read the discussion in #64. I left this issue open for reference on these questions.\r\nDon't hesitate to participate there."
] | 1,545 | 1,545 | 1,545 | NONE | null | I am trying out the `extract_features.py` example program. I noticed that a sentence gets split into wordpieces and the embeddings are generated for those. For example, for the sentence "Definitely not", the corresponding wordpieces could be ["Def", "##in", "##ite", "##ly", "not"], and it then generates the embeddings for these tokens.
My question is how do I train an NER system on CoNLL dataset?
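For reference, a minimal sketch of one common recipe: tokenize word by word, remember where each word's first wordpiece lands, and gather those positions from the last encoder layer. The choice of the first sub-token (rather than, say, averaging the pieces) is an assumption, not the only option.
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

words = ["Definitely", "not"]                      # original (CoNLL-style) tokens
tokens, first_subtoken_idx = ["[CLS]"], []
for word in words:
    first_subtoken_idx.append(len(tokens))          # position of the word's first piece
    tokens.extend(tokenizer.tokenize(word))
tokens.append("[SEP]")

input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    encoded, _ = model(input_ids, output_all_encoded_layers=False)

word_vectors = encoded[0, first_subtoken_idx]       # one 768-d vector per original word
```
These per-word vectors can then be fed to a tagging architecture trained on the CoNLL labels.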
I want to extract embeddings for original tokens for training an NER with a neural architecture. If you have come across any resource that gives a clear explanation on how to carry this out, post it here. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/148/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/147/comments | https://api.github.com/repos/huggingface/transformers/issues/147/events | https://github.com/huggingface/transformers/issues/147 | 394,064,499 | MDU6SXNzdWUzOTQwNjQ0OTk= | 147 | Does the final hidden state contains the <CLS> for Squad2.0 | {
"login": "SparkJiao",
"id": 16469472,
"node_id": "MDQ6VXNlcjE2NDY5NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/16469472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SparkJiao",
"html_url": "https://github.com/SparkJiao",
"followers_url": "https://api.github.com/users/SparkJiao/followers",
"following_url": "https://api.github.com/users/SparkJiao/following{/other_user}",
"gists_url": "https://api.github.com/users/SparkJiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SparkJiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SparkJiao/subscriptions",
"organizations_url": "https://api.github.com/users/SparkJiao/orgs",
"repos_url": "https://api.github.com/users/SparkJiao/repos",
"events_url": "https://api.github.com/users/SparkJiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/SparkJiao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm sorry that I have found a bug in my code. I have invalidly called a attribute of the `InputFeature` but it have run successfully. Now I have fixed it and re-run it. If I have more questions I will reopen this. Sorry to bother you!"
] | 1,545 | 1,545 | 1,545 | NONE | null | Recently I have been modifying `run_squad.py` to run on CoQA. In Google's TensorFlow implementation, the probability assigned to the first token of the context segment, which is the position of `<CLS>`, is used as the probability that the question is unanswerable. I tried to modify `run_squad.py` in your implementation in the same way. But when I looked at the predictions, I found that for many questions the answer is the first word of the context rather than the first token, <CLS>, so I would like to know whether your implementation removes the hidden states of the start and end tokens, or whether there may be another problem. Thank you a lot! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/147/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/146/comments | https://api.github.com/repos/huggingface/transformers/issues/146/events | https://github.com/huggingface/transformers/issues/146 | 393,876,320 | MDU6SXNzdWUzOTM4NzYzMjA= | 146 | BertForQuestionAnswering: Predicting span on the question? | {
"login": "valsworthen",
"id": 18659328,
"node_id": "MDQ6VXNlcjE4NjU5MzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/18659328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/valsworthen",
"html_url": "https://github.com/valsworthen",
"followers_url": "https://api.github.com/users/valsworthen/followers",
"following_url": "https://api.github.com/users/valsworthen/following{/other_user}",
"gists_url": "https://api.github.com/users/valsworthen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/valsworthen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/valsworthen/subscriptions",
"organizations_url": "https://api.github.com/users/valsworthen/orgs",
"repos_url": "https://api.github.com/users/valsworthen/repos",
"events_url": "https://api.github.com/users/valsworthen/events{/privacy}",
"received_events_url": "https://api.github.com/users/valsworthen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is the original behavior from the TF implementation.\r\nThe predictions are filtered afterward (in `write_predictions`) so this is probably not a big issue.\r\nMaybe try with another behavior and see if it improve upon the results?"
] | 1,545 | 1,545 | 1,545 | NONE | null | Hello,
I have a question regarding the `BertForQuestionAnswering` implementation. If I am not mistaken, for this model the sequence should be of the form `Question tokens [SEP] Passage tokens`. Therefore, the embedded representation computed by `BertModel` returns the states of both the question and the passage (a tensor of length `passage + question + 1`).
If I am not mistaken, the span logits are then calculated for the whole sequence, i.e. **they can be calculated for the question** even if the answer is always in the passage (see [the model code](https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/pytorch_pretrained_bert/modeling.py#L1097) and the [squad script](https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/examples/run_squad.py#L899)). I wonder if this behavior is really desirable. Doesn't it confuse the model?
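If one wanted to restrict the predicted span to the passage, a possible post-processing step (an assumption on my side, not what the library or the original TF code does) would be to mask the question positions using the token type ids:
```python
import torch

def mask_question_positions(start_logits, end_logits, token_type_ids):
    """Push the logits of [CLS]/question positions (token_type_ids == 0) very low so the
    predicted span can only fall inside the passage (token_type_ids == 1)."""
    question_mask = token_type_ids == 0
    start_logits = start_logits.masked_fill(question_mask, -10000.0)
    end_logits = end_logits.masked_fill(question_mask, -10000.0)
    return start_logits, end_logits
```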
Thank you for your work! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/146/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/145/comments | https://api.github.com/repos/huggingface/transformers/issues/145/events | https://github.com/huggingface/transformers/pull/145 | 393,669,641 | MDExOlB1bGxSZXF1ZXN0MjQwNjMyNjY5 | 145 | Correct the wrong note | {
"login": "wlhgtc",
"id": 16603773,
"node_id": "MDQ6VXNlcjE2NjAzNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wlhgtc",
"html_url": "https://github.com/wlhgtc",
"followers_url": "https://api.github.com/users/wlhgtc/followers",
"following_url": "https://api.github.com/users/wlhgtc/following{/other_user}",
"gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions",
"organizations_url": "https://api.github.com/users/wlhgtc/orgs",
"repos_url": "https://api.github.com/users/wlhgtc/repos",
"events_url": "https://api.github.com/users/wlhgtc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wlhgtc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,545 | 1,546 | 1,546 | CONTRIBUTOR | null | Correct the wrong note in #144 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/145/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/145",
"html_url": "https://github.com/huggingface/transformers/pull/145",
"diff_url": "https://github.com/huggingface/transformers/pull/145.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/145.patch",
"merged_at": 1546860186000
} |
https://api.github.com/repos/huggingface/transformers/issues/144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/144/comments | https://api.github.com/repos/huggingface/transformers/issues/144/events | https://github.com/huggingface/transformers/issues/144 | 393,669,200 | MDU6SXNzdWUzOTM2NjkyMDA= | 144 | Some questions in Loss Function for MaskedLM | {
"login": "wlhgtc",
"id": 16603773,
"node_id": "MDQ6VXNlcjE2NjAzNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wlhgtc",
"html_url": "https://github.com/wlhgtc",
"followers_url": "https://api.github.com/users/wlhgtc/followers",
"following_url": "https://api.github.com/users/wlhgtc/following{/other_user}",
"gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions",
"organizations_url": "https://api.github.com/users/wlhgtc/orgs",
"repos_url": "https://api.github.com/users/wlhgtc/repos",
"events_url": "https://api.github.com/users/wlhgtc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wlhgtc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@julien-c Seem some conflict with the original BERT in tf.\r\nThe code in tf is as follows:\r\n\r\n```\r\ndef gather_indexes(sequence_tensor, positions):\r\n \"\"\"Gathers the vectors at the specific positions over a minibatch.\"\"\"\r\n...\r\ninput_tensor = gather_indexes(input_tensor, positions)\r\n```\r\n\r\nOr dose you mean that we could set all words that are not masked(random pick from the sentence) and the padding(add to reach max_length) to \"-1\"(in order to ignore)?",
"> Q1:[...] But in my opinion, we only need to calculate the masked word's loss, not the whole sentence?\r\n\r\nIt's exactly what is done in the current implementation. The labels of not masked tokens are set to -1 and the loss function ignores those tokens by setting ignore_index=-1 (see [documentation](https://pytorch.org/docs/stable/nn.html#crossentropyloss))\r\n\r\n> Q2.\r\nIt's also a question about masked, \"chooses 15% of tokens at random\" in the paper, I don't know how to understand it... For each word, a probability of 15% to be masked or just 15% of the sentence is masked?\r\n\r\nEach token has a probability of 15% of getting masked. You might wanna checkout [this code](https://github.com/deepset-ai/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L288) to get a better understanding ",
"So nice to see your reply, it do fix my problem,4K.\r\n",
"@tholor I want to rebuild BERT on a single GPU, still some problems. May I know your email address ?",
"malte.pietsch [at] deepset.ai \r\nBut if you have issues that are of interest for others, please use github.",
"Thanks @tholor!"
] | 1,545 | 1,584 | 1,546 | CONTRIBUTOR | null | Use the same sentence in your **Usage** Section:
```
# Tokenized input
text = "Who was Jim Henson ? Jim Henson was a puppeteer"
tokenized_text = tokenizer.tokenize(text)
# Mask a token that we will try to predict back with `BertForMaskedLM`
masked_index = 6
tokenized_text[masked_index] = '[MASK]'
```
Q1.
When we use this sentence as training data, according to your code:
```
if masked_lm_labels is not None:
loss_fct = CrossEntropyLoss(ignore_index=-1)
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1))
return masked_lm_loss
```
it seems the loss is a sum over all words in this sentence, not just the single word "henson", am I right? But in my opinion, we only need to calculate the loss for the **masked** word, not the whole sentence.
Q2.
This is also a question about masking: the paper says it "chooses 15% of tokens at random", and I don't know how to understand it... Does each word have a 15% probability of being masked, or is exactly 15% of the sentence masked?
Hope you could help me fix them.
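For later readers, a minimal sketch of the usual masked-LM label construction under the paper's 15% / 80-10-10 scheme (an illustration, not the exact repository code):
```python
import random

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    labels = [-1] * len(input_ids)        # -1 is skipped by CrossEntropyLoss(ignore_index=-1)
    for i, token in enumerate(input_ids):
        if random.random() < mlm_prob:    # each token independently has a 15% chance
            labels[i] = token             # the loss is computed only at these positions
            dice = random.random()
            if dice < 0.8:
                input_ids[i] = mask_token_id                 # 80%: replace with [MASK]
            elif dice < 0.9:
                input_ids[i] = random.randrange(vocab_size)  # 10%: random token
            # remaining 10%: keep the original token unchanged
    return input_ids, labels
```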
By the way, the note at line 731 of pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py should read: if `masked_lm_labels` is not `None`; it is missing the word "not". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/144/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/143/comments | https://api.github.com/repos/huggingface/transformers/issues/143/events | https://github.com/huggingface/transformers/issues/143 | 393,365,633 | MDU6SXNzdWUzOTMzNjU2MzM= | 143 | bug in init_bert_weights | {
"login": "mjc14",
"id": 15847067,
"node_id": "MDQ6VXNlcjE1ODQ3MDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/15847067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mjc14",
"html_url": "https://github.com/mjc14",
"followers_url": "https://api.github.com/users/mjc14/followers",
"following_url": "https://api.github.com/users/mjc14/following{/other_user}",
"gists_url": "https://api.github.com/users/mjc14/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mjc14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mjc14/subscriptions",
"organizations_url": "https://api.github.com/users/mjc14/orgs",
"repos_url": "https://api.github.com/users/mjc14/repos",
"events_url": "https://api.github.com/users/mjc14/events{/privacy}",
"received_events_url": "https://api.github.com/users/mjc14/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Fixed, thanks"
] | 1,545 | 1,546 | 1,546 | NONE | null | hi ,
there is a bug in init_bert_weights().
BERTLayerNorm is initialized twice: the first init is in the BERTLayerNorm module's __init__(), and the second init is in init_bert_weights().
If you pre-train a model that does not start from the Google checkpoint, the second init leads to bad convergence in my experiments. gamma is the scale and beta is the shift of the layer norm; they are usually 1 and 0, and the second init changes them.
first:
```python
self.gamma = nn.Parameter(torch.ones(config.hidden_size))
self.beta = nn.Parameter(torch.zeros(config.hidden_size))
```
second:
```python
elif isinstance(module, BERTLayerNorm):
    module.beta.data.normal_(mean=0.0, std=config.initializer_range)
    module.gamma.data.normal_(mean=0.0, std=config.initializer_range)
```
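A minimal sketch of the corresponding fix, which leaves LayerNorm at its identity initialization instead of re-drawing it (`BERTLayerNorm` and `config` are taken from the snippet above and are assumed here; the exact form in the repository may differ):
```python
import torch.nn as nn

def init_bert_weights(module, config):
    """Initialize weights, but keep LayerNorm at scale 1 and shift 0."""
    if isinstance(module, (nn.Linear, nn.Embedding)):
        module.weight.data.normal_(mean=0.0, std=config.initializer_range)
    elif isinstance(module, BERTLayerNorm):
        module.gamma.data.fill_(1.0)   # keep the scale at 1
        module.beta.data.zero_()       # keep the shift at 0
    if isinstance(module, nn.Linear) and module.bias is not None:
        module.bias.data.zero_()
```
It could be applied with `model.apply(lambda m: init_bert_weights(m, config))`.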
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/143/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/142/comments | https://api.github.com/repos/huggingface/transformers/issues/142/events | https://github.com/huggingface/transformers/pull/142 | 393,327,036 | MDExOlB1bGxSZXF1ZXN0MjQwMzc5MzM3 | 142 | change in run_classifier.py | {
"login": "PatriciaRodrigues1994",
"id": 35722233,
"node_id": "MDQ6VXNlcjM1NzIyMjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/35722233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PatriciaRodrigues1994",
"html_url": "https://github.com/PatriciaRodrigues1994",
"followers_url": "https://api.github.com/users/PatriciaRodrigues1994/followers",
"following_url": "https://api.github.com/users/PatriciaRodrigues1994/following{/other_user}",
"gists_url": "https://api.github.com/users/PatriciaRodrigues1994/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PatriciaRodrigues1994/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PatriciaRodrigues1994/subscriptions",
"organizations_url": "https://api.github.com/users/PatriciaRodrigues1994/orgs",
"repos_url": "https://api.github.com/users/PatriciaRodrigues1994/repos",
"events_url": "https://api.github.com/users/PatriciaRodrigues1994/events{/privacy}",
"received_events_url": "https://api.github.com/users/PatriciaRodrigues1994/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks! #141 already addressed this problem."
] | 1,545 | 1,546 | 1,546 | NONE | null | While running the dev set for multi-label classification (more than two labels), it gives an assertion error. Specifying num_labels when creating the model again for evaluation solves this problem. Thus the only changes needed for multi-label classification are in the initial num_labels dict and in specifying the base classes in the chosen DataProcessor class. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/142/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/142/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/142",
"html_url": "https://github.com/huggingface/transformers/pull/142",
"diff_url": "https://github.com/huggingface/transformers/pull/142.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/142.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/141/comments | https://api.github.com/repos/huggingface/transformers/issues/141/events | https://github.com/huggingface/transformers/pull/141 | 393,226,114 | MDExOlB1bGxSZXF1ZXN0MjQwMzEzMjkz | 141 | loading saved model when n_classes != 2 | {
"login": "SinghJasdeep",
"id": 33911313,
"node_id": "MDQ6VXNlcjMzOTExMzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/33911313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SinghJasdeep",
"html_url": "https://github.com/SinghJasdeep",
"followers_url": "https://api.github.com/users/SinghJasdeep/followers",
"following_url": "https://api.github.com/users/SinghJasdeep/following{/other_user}",
"gists_url": "https://api.github.com/users/SinghJasdeep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SinghJasdeep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SinghJasdeep/subscriptions",
"organizations_url": "https://api.github.com/users/SinghJasdeep/orgs",
"repos_url": "https://api.github.com/users/SinghJasdeep/repos",
"events_url": "https://api.github.com/users/SinghJasdeep/events{/privacy}",
"received_events_url": "https://api.github.com/users/SinghJasdeep/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This problem is discussed is #135 and I don't think that this is the right way to patch this problem. The saved model contains the `num_labels` information.",
"cf discussion in #135 let's go for the `mandatory-argument`solution for now."
] | 1,545 | 1,546 | 1,546 | CONTRIBUTOR | null | Required to fix: Assertion `t >= 0 && t < n_classes` failed, which occurs if your default number of classes is not 2. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/141/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/141",
"html_url": "https://github.com/huggingface/transformers/pull/141",
"diff_url": "https://github.com/huggingface/transformers/pull/141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/141.patch",
"merged_at": 1546860073000
} |
https://api.github.com/repos/huggingface/transformers/issues/140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/140/comments | https://api.github.com/repos/huggingface/transformers/issues/140/events | https://github.com/huggingface/transformers/issues/140 | 393,167,870 | MDU6SXNzdWUzOTMxNjc4NzA= | 140 | Not able to use FP16 in pytorch-pretrained-BERT. Getting error **Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target** | {
"login": "Ashish-Gupta03",
"id": 7694700,
"node_id": "MDQ6VXNlcjc2OTQ3MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7694700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ashish-Gupta03",
"html_url": "https://github.com/Ashish-Gupta03",
"followers_url": "https://api.github.com/users/Ashish-Gupta03/followers",
"following_url": "https://api.github.com/users/Ashish-Gupta03/following{/other_user}",
"gists_url": "https://api.github.com/users/Ashish-Gupta03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ashish-Gupta03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ashish-Gupta03/subscriptions",
"organizations_url": "https://api.github.com/users/Ashish-Gupta03/orgs",
"repos_url": "https://api.github.com/users/Ashish-Gupta03/repos",
"events_url": "https://api.github.com/users/Ashish-Gupta03/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ashish-Gupta03/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Which kind of GPU are you using? `fp16` only works on recent GPU (better with Tesla and Volta series).",
"I experienced a similar issue with CUDA 9.1. Using 9.2 solved this for me. ",
"Yes, CUDA 10 is recommended for using fp16 with good performances."
] | 1,545 | 1,546 | 1,546 | NONE | null | I'm not able to work with FP16 for pytorch BERT code. Particularly for BertForSequenceClassification, which I tried and got the issue
**Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target**
when I enabled fp16.
Also when using
`logits = logits.half()
labels = labels.half()`
then the epoch time also increased.
The training time without fp16 was 2.5 hrs per epoch; after doing logits.half() and labels.half(), the runtime per epoch shot up to 8 hrs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/140/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/139/comments | https://api.github.com/repos/huggingface/transformers/issues/139/events | https://github.com/huggingface/transformers/issues/139 | 393,167,784 | MDU6SXNzdWUzOTMxNjc3ODQ= | 139 | Not able to use FP16 in pytorch-pretrained-BERT | {
"login": "Ashish-Gupta03",
"id": 7694700,
"node_id": "MDQ6VXNlcjc2OTQ3MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7694700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ashish-Gupta03",
"html_url": "https://github.com/Ashish-Gupta03",
"followers_url": "https://api.github.com/users/Ashish-Gupta03/followers",
"following_url": "https://api.github.com/users/Ashish-Gupta03/following{/other_user}",
"gists_url": "https://api.github.com/users/Ashish-Gupta03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ashish-Gupta03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ashish-Gupta03/subscriptions",
"organizations_url": "https://api.github.com/users/Ashish-Gupta03/orgs",
"repos_url": "https://api.github.com/users/Ashish-Gupta03/repos",
"events_url": "https://api.github.com/users/Ashish-Gupta03/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ashish-Gupta03/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,545 | 1,545 | 1,545 | NONE | null | I'm not able to work with FP16 for pytorch BERT code. Particularly for BertForSequenceClassification, which I tried and got the issue
**Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target**
when I enabled fp16.
Also when using
`logits = logits.half()
labels = labels.half()`
then the epoch time also increased.
_Originally posted by @Ashish-Gupta03 in https://github.com/huggingface/pytorch-pretrained-BERT/issue_comments#issuecomment-449096213_ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/139/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/138/comments | https://api.github.com/repos/huggingface/transformers/issues/138/events | https://github.com/huggingface/transformers/issues/138 | 393,142,144 | MDU6SXNzdWUzOTMxNDIxNDQ= | 138 | Problem loading finetuned model for squad | {
"login": "ni40in",
"id": 9155183,
"node_id": "MDQ6VXNlcjkxNTUxODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9155183?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ni40in",
"html_url": "https://github.com/ni40in",
"followers_url": "https://api.github.com/users/ni40in/followers",
"following_url": "https://api.github.com/users/ni40in/following{/other_user}",
"gists_url": "https://api.github.com/users/ni40in/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ni40in/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ni40in/subscriptions",
"organizations_url": "https://api.github.com/users/ni40in/orgs",
"repos_url": "https://api.github.com/users/ni40in/repos",
"events_url": "https://api.github.com/users/ni40in/events{/privacy}",
"received_events_url": "https://api.github.com/users/ni40in/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Judging from the error message, I would say that the error is caused by the following line: https://github.com/huggingface/pytorch-pretrained-BERT/blob/7fb94ab934b2ad1041613fc93c61d13105faf98a/pytorch_pretrained_bert/modeling.py#L541\r\n\r\nApparently, the proper way to save a model is the following one:\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/7fb94ab934b2ad1041613fc93c61d13105faf98a/examples/run_classifier.py#L554-L557\r\n\r\nIs this what you are doing?",
"hi @rodgzilla i see that model is being saved the same way in squad.py:\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/7fb94ab934b2ad1041613fc93c61d13105faf98a/examples/run_squad.py#L918-L921\r\nso the problem must be elsewhere",
"I run into the same problem, using the pytorch_model.bin generated by `run_classifier.py`:\r\n\r\n```bash\r\n!python pytorch-pretrained-BERT/examples/run_classifier.py \\\r\n --task_name=MRPC \\\r\n --do_train \\\r\n --do_eval \\\r\n --data_dir=./ \\\r\n --bert_model=bert-base-chinese \\\r\n --max_seq_length=64 \\\r\n --train_batch_size=32 \\\r\n --learning_rate=2e-5 \\\r\n --num_train_epochs=3.0 \\\r\n --output_dir=./models/\r\n```\r\n\r\nAnd try to load the fine-tuned model:\r\n\r\n```py\r\nfrom pytorch_pretrained_bert import modeling\r\nfrom pytorch_pretrained_bert import BertForSequenceClassification\r\n\r\n# Load pre-trained model (weights)\r\nconfig = modeling.BertConfig(\r\n vocab_size_or_config_json_file=21128,\r\n hidden_size=768,\r\n num_hidden_layers=12,\r\n num_attention_heads=12,\r\n intermediate_size=3072,\r\n hidden_act=\"gelu\",\r\n hidden_dropout_prob=0.1,\r\n attention_probs_dropout_prob=0.1,\r\n max_position_embeddings=512,\r\n type_vocab_size=2,\r\n initializer_range=0.02)\r\n\r\nmodel = BertForSequenceClassification(config)\r\n\r\nmodel_state_dict = \"models/pytorch_model.bin\"\r\nmodel.bert.load_state_dict(torch.load(model_state_dict))\r\n```\r\n\r\n```py\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-22-cdc19dc2541c> in <module>()\r\n 20 # issues: https://github.com/huggingface/pytorch-pretrained-BERT/issues/138\r\n 21 model_state_dict = \"models/pytorch_model.bin\"\r\n---> 22 model.bert.load_state_dict(torch.load(model_state_dict))\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)\r\n 767 if len(error_msgs) > 0:\r\n 768 raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\n--> 769 self.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\n 770 \r\n 771 def _named_members(self, get_members_fn, prefix='', recurse=True):\r\n\r\nRuntimeError: Error(s) in loading state_dict for BertModel:\r\n\tMissing key(s) in state_dict: \"embeddings.word_embeddings.weight\", \"embeddings.position_embeddings.weight\", \"embeddings.token_type_embeddings.weight\", \"embeddings.LayerNorm.weight\", \"embeddings.LayerNorm.bias\", \"encoder.layer.0.attention.self.query.weight\", \"encoder.layer.0.attention.self.query.bias\", \"encoder.layer.0.attention.self.key.weight\", \"encoder.layer.0.attention.self.key.bias\", \"encoder.layer.0.attention.self.value.weight\", \"encoder.layer.0.attention.self.value.bias\", \"encoder.layer.0.attention.output.dense.weight\", \"encoder.layer.0.attention.output.dense.bias\", \"encoder.layer.0.attention.output.LayerNorm.weight\", \"encoder.layer.0.attention.output.LayerNorm.bias\", \"encoder.layer.0.intermediate.dense.weight\", \"encoder.layer.0.intermediate.dense.bias\", \"encoder.layer.0.output.dense.weight\", \"encoder.layer.0.output.dense.bias\", \"encoder.layer.0.output.LayerNorm.weight\", \"encoder.layer.0.output.LayerNorm.bias\", \"encoder.layer.1.attention.self.query.weight\", \"encoder.layer.1.attention.self.query.bias\", \"encoder.layer.1.attention.self.key.weight\", \"encoder.layer.1.attention.self.key.bias\", \"encoder.layer.1.attention.self.value.weight\", \"encoder.layer.1.attention.self.value.bias\", \"encoder.layer.1.attention.output.dense.weight\", \"encoder.layer.1.attention.output.dense.bias\", \"encoder.layer.1.attention.output.LayerNorm.weight\", \"encoder.layer.1.attention.output.LayerNorm.bias\", \"encoder.layer.1.intermediate.dense.weight\", \"encoder.layer.1.intermediate.dense.bias\", \"enco...\r\n\tUnexpected key(s) in 
state_dict: \"bert.embeddings.word_embeddings.weight\", \"bert.embeddings.position_embeddings.weight\", \"bert.embeddings.token_type_embeddings.weight\", \"bert.embeddings.LayerNorm.weight\", \"bert.embeddings.LayerNorm.bias\", \"bert.encoder.layer.0.attention.self.query.weight\", \"bert.encoder.layer.0.attention.self.query.bias\", \"bert.encoder.layer.0.attention.self.key.weight\", \"bert.encoder.layer.0.attention.self.key.bias\", \"bert.encoder.layer.0.attention.self.value.weight\", \"bert.encoder.layer.0.attention.self.value.bias\", \"bert.encoder.layer.0.attention.output.dense.weight\", \"bert.encoder.layer.0.attention.output.dense.bias\", \"bert.encoder.layer.0.attention.output.LayerNorm.weight\", \"bert.encoder.layer.0.attention.output.LayerNorm.bias\", \"bert.encoder.layer.0.intermediate.dense.weight\", \"bert.encoder.layer.0.intermediate.dense.bias\", \"bert.encoder.layer.0.output.dense.weight\", \"bert.encoder.layer.0.output.dense.bias\", \"bert.encoder.layer.0.output.LayerNorm.weight\", \"bert.encoder.layer.0.output.LayerNorm.bias\", \"bert.encoder.layer.1.attention.self.query.weight\", \"bert.encoder.layer.1.attention.self.query.bias\", \"bert.encoder.layer.1.attention.self.key.weight\", \"bert.encoder.layer.1.attention.self.key.bias\", \"bert.encoder.layer.1.attention.self.value.weight\", \"bert.encoder.layer.1.attention.self.value.bias\", \"bert.encoder.layer.1.attention.output.dense.weight\", \"bert.encoder.layer.1.attention.output.dense.bias\", \"bert.encoder.layer.1.attention.output.LayerNorm....\r\n```\r\n\r\nHow can I load a fine-tuned model?",
"Hi, here the problem is not with the saving of the model but the loading.\r\n\r\nYou should just use\r\n```\r\nmodel.load_state_dict(torch.load(model_state_dict))\r\n```\r\nand not\r\n```\r\nmodel.bert.load_state_dict(torch.load(model_state_dict))\r\n```\r\n\r\nAlternatively, here is an example on how to save and then load a model using `from_pretrained`:\r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/2e4db64cab198dc241e18221ef088908f2587c61/examples/run_squad.py#L916-L924"
] | 1,545 | 1,546 | 1,546 | NONE | null | Hi,
I'm trying to load a fine-tuned model for question answering which I trained with squad.py:
```
import torch
from pytorch_pretrained_bert import BertModel, BertForQuestionAnswering
from pytorch_pretrained_bert import modeling
config = modeling.BertConfig(attention_probs_dropout_prob=0.1, hidden_dropout_prob=0.1, hidden_size=768, initializer_range=0.02, intermediate_size=3072, max_position_embeddings=512, num_attention_heads=12, num_hidden_layers=12, vocab_size_or_config_json_file=30522)
model = modeling.BertForQuestionAnswering(config)
model_state_dict = "/home/ubuntu/bert_squad/bert_fine_121918/pytorch_model.bin"
model.bert.load_state_dict(torch.load(model_state_dict))
```
but receiving an error on the last line:
> Error(s) in loading state_dict for BertModel:
> Missing key(s) in state_dict: "embeddings.word_embeddings.weight", "embeddings.position_embeddings.weight", "embeddings.token_type_embeddings.weight", "embeddings.LayerNorm.weight", "embeddings.LayerNorm.bias", "encoder.layer.0.attention.self.query.weight",....
> Unexpected key(s) in state_dict: "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight",....
It looks like the model definition is not in the expected format. Could you point me to what went wrong? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/138/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/138/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/137/comments | https://api.github.com/repos/huggingface/transformers/issues/137/events | https://github.com/huggingface/transformers/issues/137 | 393,079,924 | MDU6SXNzdWUzOTMwNzk5MjQ= | 137 | run_squad.py without GPU.. Without CUPY | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@SandeepBhutani What was the conclusion of this issue? ",
"Is this issue still open.. It can be closed.. It was an environment issue..\n\nOn Sat, 13 Jul, 2019, 5:00 AM Peter, <[email protected]> wrote:\n\n> @SandeepBhutani <https://github.com/SandeepBhutani> What was the\n> conclusion of this issue?\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-pretrained-BERT/issues/137?email_source=notifications&email_token=AHRBKIGTRSIMUPOHIXZNKWDP7EHYNA5CNFSM4GLRLKFKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGODZ3DFRY#issuecomment-511062727>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AHRBKIEAQAEZLMPO2JPH64TP7EHYNANCNFSM4GLRLKFA>\n> .\n>\n"
] | 1,545 | 1,563 | 1,545 | NONE | null | I am trying to run run_squad.py for the QnA (SQuAD) case. It seems to depend on a GPU, i.e., cupy has to be installed.
In one of my environments I don't have a GPU, so cupy cannot be installed and I am not able to proceed with training.
Can I train on the CPU itself?
The following is what I am trying to run:
```
python run_squad.py \
--bert_model bert-base-uncased \
--do_train \
--do_predict \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
and I get following error:
```
RuntimeError: CUDA environment is not correctly set up
(see https://github.com/chainer/chainer#installation).No module named 'cupy'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/137/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/136/comments | https://api.github.com/repos/huggingface/transformers/issues/136/events | https://github.com/huggingface/transformers/issues/136 | 393,058,463 | MDU6SXNzdWUzOTMwNTg0NjM= | 136 | It's possible to avoid download the pretrained model? | {
"login": "rxy1212",
"id": 14829556,
"node_id": "MDQ6VXNlcjE0ODI5NTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/14829556?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rxy1212",
"html_url": "https://github.com/rxy1212",
"followers_url": "https://api.github.com/users/rxy1212/followers",
"following_url": "https://api.github.com/users/rxy1212/following{/other_user}",
"gists_url": "https://api.github.com/users/rxy1212/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rxy1212/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rxy1212/subscriptions",
"organizations_url": "https://api.github.com/users/rxy1212/orgs",
"repos_url": "https://api.github.com/users/rxy1212/repos",
"events_url": "https://api.github.com/users/rxy1212/events{/privacy}",
"received_events_url": "https://api.github.com/users/rxy1212/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I just find the way.",
"@rxy1212 could you explain the method used ",
"@makkunda \r\nIn `modeling.py`, you can find this codes\r\n```\r\nPRETRAINED_MODEL_ARCHIVE_MAP = {\r\n 'bert-base-uncased': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz\",\r\n 'bert-large-uncased': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased.tar.gz\",\r\n 'bert-base-cased': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased.tar.gz\",\r\n 'bert-large-cased': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased.tar.gz\",\r\n 'bert-base-multilingual-uncased': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased.tar.gz\",\r\n 'bert-base-multilingual-cased': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased.tar.gz\",\r\n 'bert-base-chinese': \"https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese.tar.gz\",\r\n}\r\n```\r\n\r\njust download the model you need by the url and unzip it, then you will get `bert_config.json` and `pytorch_model.bin`. You can put them in a folder X.\r\nNow, you can use `model = BertModel.from_pretrained('THE-PATH-OF-X')`\r\n"
] | 1,545 | 1,545 | 1,545 | NONE | null | When I run the code `model = BertModel.from_pretrained('bert-base-uncased')`, it downloads a big file, and sometimes that's very slow. I have already downloaded the model from [https://github.com/google-research/bert](url). So, is it possible to avoid downloading the pretrained model when I use pytorch-pretrained-BERT for the first time? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/136/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/135/comments | https://api.github.com/repos/huggingface/transformers/issues/135/events | https://github.com/huggingface/transformers/issues/135 | 393,055,660 | MDU6SXNzdWUzOTMwNTU2NjA= | 135 | Problem loading a finetuned model. | {
"login": "rodgzilla",
"id": 12107203,
"node_id": "MDQ6VXNlcjEyMTA3MjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/12107203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rodgzilla",
"html_url": "https://github.com/rodgzilla",
"followers_url": "https://api.github.com/users/rodgzilla/followers",
"following_url": "https://api.github.com/users/rodgzilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rodgzilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rodgzilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rodgzilla/subscriptions",
"organizations_url": "https://api.github.com/users/rodgzilla/orgs",
"repos_url": "https://api.github.com/users/rodgzilla/repos",
"events_url": "https://api.github.com/users/rodgzilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rodgzilla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok I managed to find the problem. It comes from:\r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/7fb94ab934b2ad1041613fc93c61d13105faf98a/pytorch_pretrained_bert/modeling.py#L534-L540\r\n\r\nWhen trying to load `classifier.weight` and `classifier.bias`, the following line gets added to `error_msgs`: \r\n```\r\nsize mismatch for classifier.weight: copying a param with shape torch.Size([16, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).\r\nsize mismatch for classifier.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([2]).\r\n```\r\nFirst, I think that we should add a check of `error_msgs` to `from_pretrained`. I don't really know if there is any other way than printing an error message and existing the program since the default behavior (keeping the classifier layer randomly initialized) can be frustrating for the user (I speak from experience ^^).\r\n\r\nTo fix this, we should probably fetch the number of labels of the saved model and use it to instanciate the model being created before loading the saved weights. Unfortunately I don't really know how to do that, any idea?\r\n\r\nAnother possible \"fix\" would be to force the user to give a `num_labels` argument when loading a pretrained classification model with the following code in `BertForSequenceClassification`:\r\n```python\r\n @classmethod\r\n def from_pretrained(cls, *args, **kwargs):\r\n if 'num_labels' not in kwargs:\r\n raise ValueError('num_labels should be given when loading a pre-trained classification model')\r\n return super().from_pretrained(*args, **kwargs)\r\n```\r\nAnd even with this code, we are not able to check that the `num_labels` value is the same as the saved model. I don't really like the idea of forcing the user to give an information that the checkpoint already contains.",
"Just use the num_labels when you load your model\r\n```python\r\nmodel_state_dict = torch.load(model_fn)\r\nloaded_model = BertForSequenceClassification.from_pretrained(bert_model, state_dict = model_state_dict, num_labels = 16)\r\nprint(loaded_model.num_labels)```\r\n",
"As mentioned in my previous posts, I think that the library should either fetch the number of labels from the save file or force the user to provide a `num_labels` argument. \r\n\r\nWhile what you are proposing fixes my problem I would like to prevent this problem for other users in the future by patching the library code.",
"I see thanks @rodgzilla. Indeed not using the `error_msg` is bad practice, let's raise these errors.\r\n\r\nRegarding fetching the number of labels, I understand your point but it will probably add too much custom logic in the library for the moment so let's go for your simple solution of setting the number of labels as mandatory for now (should have done that since the beginning).",
"Hi everyone!\r\nI had to come here to know that I had to include `num_labels` when loading the model because the error was misleading.\r\nAlso, I didn't know how many labels there were so I had to guess.\r\nThe model I was trying to load:\r\n[biobert-base-cased-v1.1-mnli](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1-mnli#)",
"I'm also facing a similar problem using the same model as @ugm2 - [biobert-base-cased-v1.1-mnli](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1-mnli#) \r\n\r\nIn my example I know the exact `num_labels` and provide it as an argument while loading the model. \r\nHow can I solve this?\r\n\r\n```\r\nRuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:\r\n\tsize mismatch for classifier.weight: copying a param with shape torch.Size([3, 768]) from checkpoint, the shape in current model is torch.Size([10, 768]).\r\n\tsize mismatch for classifier.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([10]).\r\n```",
"With the latest transformers versions, you can use the recently introduced (https://github.com/huggingface/transformers/pull/12664) `ignore_mismatched_sizes=True` parameter for `from_pretrained` method in order to specify that you'd rather drop the layers that have incompatible shapes rather than raise a `RuntimeError`. "
] | 1,545 | 1,628 | 1,546 | CONTRIBUTOR | null | Hi!
There is a problem with the way models are saved and loaded. The following code should crash but doesn't:
```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification
model_fn = 'model.bin'
bert_model = 'bert-base-multilingual-cased'
model = BertForSequenceClassification.from_pretrained(bert_model, num_labels = 16)
model_to_save = model.module if hasattr(model, 'module') else model
torch.save(model_to_save.state_dict(), model_fn)
print(model_to_save.num_labels)
model_state_dict = torch.load(model_fn)
loaded_model = BertForSequenceClassification.from_pretrained(bert_model, state_dict = model_state_dict)
print(loaded_model.num_labels)
```
This code prints:
```
16
2
```
The code should raise an exception when trying to load the weights of the task specific linear layer. I'm guessing that the problem comes from `PreTrainedBertModel.from_pretrained`.
I would be happy to submit a PR fixing this problem but I'm not used to working with the PyTorch loading mechanisms. @thomwolf could you give me some guidance?
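In the meantime, passing `num_labels` explicitly when loading the fine-tuned weights works around the silent mismatch (a minimal sketch, not a fix for the library itself):
```python
# Workaround sketch: build the classifier head with the right shape
# before loading the fine-tuned weights, by passing num_labels explicitly.
model_state_dict = torch.load(model_fn)
loaded_model = BertForSequenceClassification.from_pretrained(
    bert_model, state_dict=model_state_dict, num_labels=16)
print(loaded_model.num_labels)  # 16, as expected
```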
Cheers! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/135/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/134/comments | https://api.github.com/repos/huggingface/transformers/issues/134/events | https://github.com/huggingface/transformers/pull/134 | 393,020,564 | MDExOlB1bGxSZXF1ZXN0MjQwMTUwMTc1 | 134 | Fixing various class documentations. | {
"login": "rodgzilla",
"id": 12107203,
"node_id": "MDQ6VXNlcjEyMTA3MjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/12107203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rodgzilla",
"html_url": "https://github.com/rodgzilla",
"followers_url": "https://api.github.com/users/rodgzilla/followers",
"following_url": "https://api.github.com/users/rodgzilla/following{/other_user}",
"gists_url": "https://api.github.com/users/rodgzilla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rodgzilla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rodgzilla/subscriptions",
"organizations_url": "https://api.github.com/users/rodgzilla/orgs",
"repos_url": "https://api.github.com/users/rodgzilla/repos",
"events_url": "https://api.github.com/users/rodgzilla/events{/privacy}",
"received_events_url": "https://api.github.com/users/rodgzilla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nice, thanks Gregory!"
] | 1,545 | 1,546 | 1,546 | CONTRIBUTOR | null | Hi!
The documentation of `PretrainedBertModel` was missing the new pre-trained model names and the one of `BertForQuestionAnswering` was wrong (due to a copy-pasting mistake I assume).
Cheers! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/134/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/134",
"html_url": "https://github.com/huggingface/transformers/pull/134",
"diff_url": "https://github.com/huggingface/transformers/pull/134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/134.patch",
"merged_at": 1546859167000
} |
https://api.github.com/repos/huggingface/transformers/issues/133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/133/comments | https://api.github.com/repos/huggingface/transformers/issues/133/events | https://github.com/huggingface/transformers/issues/133 | 392,922,322 | MDU6SXNzdWUzOTI5MjIzMjI= | 133 | lower accuracy on OMD(Obama-McCain Debate twitter sentiment dataset) | {
"login": "AIRobotZhang",
"id": 20748608,
"node_id": "MDQ6VXNlcjIwNzQ4NjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/20748608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AIRobotZhang",
"html_url": "https://github.com/AIRobotZhang",
"followers_url": "https://api.github.com/users/AIRobotZhang/followers",
"following_url": "https://api.github.com/users/AIRobotZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/AIRobotZhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AIRobotZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AIRobotZhang/subscriptions",
"organizations_url": "https://api.github.com/users/AIRobotZhang/orgs",
"repos_url": "https://api.github.com/users/AIRobotZhang/repos",
"events_url": "https://api.github.com/users/AIRobotZhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/AIRobotZhang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We need more informations on the parameters you use to run this training in order to understand what might be wrong.",
"> We need more informations on the parameters you use to run this training in order to understand what might be wrong.\r\n\r\nTHANK YOU! Because of limited mermory, the batch_size is 8 and epoch is 6, and the content is short, so set the max_length is 50, other parameters are default.",
"Try various values for the hyper-parameters and at least 10 different seed values.\r\nLimited memory should not be a limitation when you use `gradient accumulation` as indicated in the readme [here](https://github.com/huggingface/pytorch-pretrained-BERT#training-large-models-introduction-tools-and-examples) (see also how it is used in all the examples like `run_classifier`, `run_squad`...)"
] | 1,545 | 1,546 | 1,546 | NONE | null | I ran the classification task with a pretrained BERT model, but the result is much lower than other methods on the OMD dataset, which has 2 labels. The final accuracy is only 62% on this binary classification task! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/133/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/132/comments | https://api.github.com/repos/huggingface/transformers/issues/132/events | https://github.com/huggingface/transformers/issues/132 | 392,898,311 | MDU6SXNzdWUzOTI4OTgzMTE= | 132 | NONE | {
"login": "HuXiangkun",
"id": 6700036,
"node_id": "MDQ6VXNlcjY3MDAwMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6700036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HuXiangkun",
"html_url": "https://github.com/HuXiangkun",
"followers_url": "https://api.github.com/users/HuXiangkun/followers",
"following_url": "https://api.github.com/users/HuXiangkun/following{/other_user}",
"gists_url": "https://api.github.com/users/HuXiangkun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HuXiangkun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuXiangkun/subscriptions",
"organizations_url": "https://api.github.com/users/HuXiangkun/orgs",
"repos_url": "https://api.github.com/users/HuXiangkun/repos",
"events_url": "https://api.github.com/users/HuXiangkun/events{/privacy}",
"received_events_url": "https://api.github.com/users/HuXiangkun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,545 | 1,546 | 1,546 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/132/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/132/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/131/comments | https://api.github.com/repos/huggingface/transformers/issues/131/events | https://github.com/huggingface/transformers/issues/131 | 392,583,727 | MDU6SXNzdWUzOTI1ODM3Mjc= | 131 | bert-base-multilingual-cased, do lower case problem | {
"login": "itchanghi",
"id": 39073882,
"node_id": "MDQ6VXNlcjM5MDczODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/39073882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itchanghi",
"html_url": "https://github.com/itchanghi",
"followers_url": "https://api.github.com/users/itchanghi/followers",
"following_url": "https://api.github.com/users/itchanghi/following{/other_user}",
"gists_url": "https://api.github.com/users/itchanghi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itchanghi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itchanghi/subscriptions",
"organizations_url": "https://api.github.com/users/itchanghi/orgs",
"repos_url": "https://api.github.com/users/itchanghi/repos",
"events_url": "https://api.github.com/users/itchanghi/events{/privacy}",
"received_events_url": "https://api.github.com/users/itchanghi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @itchanghi, thanks for the feedback. Indeed the `run_squad` example was not updated for `cased` models. I fixed that in commits c9fd3505678d581388fb44ba1d79ac41e8fb28a4 and 2e4db64cab198dc241e18221ef088908f2587c61.\r\n\r\nPlease re-open the issue if your problem is not fixed (and maybe summarize it in an updated version).",
"It seems that default do_lower_case is still True."
] | 1,545 | 1,601 | 1,546 | NONE | null | I'm working on fine-tuning the SQuAD task with the multilingual-cased model.
Google says "When using a cased model, make sure to pass --do_lower=False to the training scripts. (Or pass do_lower_case=False directly to FullTokenizer if you're using your own script.)"
So, I added "do_lower_case" argument to run squad script. However I got a some wired token converted result like this ['[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '[UNK]', '?'].
I think that there are two problem on example of run_squad.py.
1. default argument
```
parser.add_argument("--do_lower_case",
default=True,
action='store_true',
help="Whether to lower case the input text. True for uncased models, False for cased models.")
```
"--do_lower_case" 's default value is True also action value is 'store_true which means any case goes to args.do_lower_case value set True.
to be changed : default=True -> default=False
even though changing like above, tokenizer never know what happen ~
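For reference, a minimal sketch of a corrected flag definition (illustrative only, not necessarily the exact patch that landed in the repo):
```
parser.add_argument("--do_lower_case",
                    action='store_true',
                    help="Set this flag if you are using an uncased model.")
```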
2. initialize Tokenizer
`tokenizer = BertTokenizer.from_pretrained(args.bert_model)`
In BertTokenizer's init method, do_lower_case is set to True by default.
```
def __init__(self, do_lower_case=True):
"""Constructs a BasicTokenizer.
Args:
do_lower_case: Whether to lower case the input.
"""
self.do_lower_case = do_lower_case
```
That's why, when the classmethod from_pretrained is called with no additional arguments, there is no way to change the do_lower_case value.
```
@classmethod
def from_pretrained(cls, pretrained_model_name, cache_dir=None, *inputs, **kwargs):
"""
'''
'''
skip
'''
# Instantiate tokenizer.
tokenizer = cls(resolved_vocab_file, *inputs, **kwargs)
return tokenizer
```
To be changed: `BertTokenizer.from_pretrained(args.bert_model, do_lower_case=False)`
It is probably not a big problem, but someone could be bitten by this issue. Many thanks for fixing it.
BTW, I still do not understand why I get [UNK] tokens for everything except English, punctuation and numbers.
The input text is Korean.
When reading data, the do_lower_case flag only controls whether "token.lower()" and "_run_strip_accents(text)" are called or not.
When do_lower_case is False, the tokenizer works fine and I get the expected result. In that case the tokens do not go through the "token.lower()" and "_run_strip_accents(text)" methods.
Even if I set do_lower_case to True, so that "token.lower()" and "_run_strip_accents(text)" are called, there is no difference: I debugged inside _run_strip_accents, and the input string and the returned string are the same.
```
def _run_strip_accents(self, text):
"""Strips accents from a piece of text."""
text = unicodedata.normalize("NFD", text)
output = []
for char in text:
cat = unicodedata.category(char)
if cat == "Mn":
continue
output.append(char)
return "".join(output)
```
The input string is just split into characters and checked for accent characters.
But Korean doesn't have accent characters, so joining the output list completely restores the input text.
Any advice?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/131/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/130/comments | https://api.github.com/repos/huggingface/transformers/issues/130/events | https://github.com/huggingface/transformers/pull/130 | 392,429,830 | MDExOlB1bGxSZXF1ZXN0MjM5NzAzOTA0 | 130 | Use entry-points instead of scripts | {
"login": "sodre",
"id": 1043285,
"node_id": "MDQ6VXNlcjEwNDMyODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1043285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sodre",
"html_url": "https://github.com/sodre",
"followers_url": "https://api.github.com/users/sodre/followers",
"following_url": "https://api.github.com/users/sodre/following{/other_user}",
"gists_url": "https://api.github.com/users/sodre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sodre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sodre/subscriptions",
"organizations_url": "https://api.github.com/users/sodre/orgs",
"repos_url": "https://api.github.com/users/sodre/repos",
"events_url": "https://api.github.com/users/sodre/events{/privacy}",
"received_events_url": "https://api.github.com/users/sodre/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks great indeed, thanks for that!"
] | 1,545 | 1,545 | 1,545 | CONTRIBUTOR | null | The recommended approach to create launch scripts is to use entry_points
and console_scripts.
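For illustration, a console_scripts entry point is declared roughly like this (a sketch; the concrete command and module names used by this PR may differ):
```python
# setup.py (sketch): setuptools generates the launcher scripts at install time
from setuptools import setup

setup(
    name="pytorch_pretrained_bert",
    entry_points={
        "console_scripts": [
            # "command = package.module:function" (illustrative mapping)
            "pytorch_pretrained_bert = pytorch_pretrained_bert.__main__:main",
        ],
    },
)
```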
xref:
https://packaging.python.org/guides/distributing-packages-using-setuptools/#scripts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/130/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/130",
"html_url": "https://github.com/huggingface/transformers/pull/130",
"diff_url": "https://github.com/huggingface/transformers/pull/130.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/130.patch",
"merged_at": 1545211105000
} |
https://api.github.com/repos/huggingface/transformers/issues/129 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/129/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/129/comments | https://api.github.com/repos/huggingface/transformers/issues/129/events | https://github.com/huggingface/transformers/issues/129 | 392,409,375 | MDU6SXNzdWUzOTI0MDkzNzU= | 129 | BERT + CNN classifier doesn't work after migrating from 0.1.2 to 0.4.0 | {
"login": "jwang-lp",
"id": 944876,
"node_id": "MDQ6VXNlcjk0NDg3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/944876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jwang-lp",
"html_url": "https://github.com/jwang-lp",
"followers_url": "https://api.github.com/users/jwang-lp/followers",
"following_url": "https://api.github.com/users/jwang-lp/following{/other_user}",
"gists_url": "https://api.github.com/users/jwang-lp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jwang-lp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jwang-lp/subscriptions",
"organizations_url": "https://api.github.com/users/jwang-lp/orgs",
"repos_url": "https://api.github.com/users/jwang-lp/repos",
"events_url": "https://api.github.com/users/jwang-lp/events{/privacy}",
"received_events_url": "https://api.github.com/users/jwang-lp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't know...\r\nIf you can open-source a self contained example with data and code I can try to give it a deeper look.\r\nAre you using `apex`? That's the main change in 0.4.0.",
"Hi Thomas! I've found the problem. I think It's because you modified your `from_pretrained` function and I'm still using a part of the `from_pretrained` function from version 0.1.2, which resulted in some compatibility issues. Thanks!"
] | 1,545 | 1,545 | 1,545 | NONE | null | I used BERT in a very simple sentence classification task:
in `__init__` I have
```python3
self.bert = BertModel(config)
self.cnn_classifier = CNNClassifier(self.config.hidden_size, intent_cls_num)
```
and in forward it's just
```python3
encoded_layers, _ = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
confidence_score = self.cnn_classifier(encoded_layers)
masked_lm_loss = loss_fct(confidence_score, ground_truth_labels)
```
This code works perfectly when I use version 0.1.2, but in 0.4.0 it:
- always predicts the most common class when I have a large training set
- cannot even learn a dataset with only 4 samples (fed in as one batch), though it can learn a single sample
Why are these problems happening in 0.4.0? The only change in my code is that I changed `weight_decay_rate` to `weight_decay`... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/129/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/128 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/128/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/128/comments | https://api.github.com/repos/huggingface/transformers/issues/128/events | https://github.com/huggingface/transformers/pull/128 | 392,406,933 | MDExOlB1bGxSZXF1ZXN0MjM5Njg3MTMy | 128 | Add license to source distribution | {
"login": "sodre",
"id": 1043285,
"node_id": "MDQ6VXNlcjEwNDMyODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1043285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sodre",
"html_url": "https://github.com/sodre",
"followers_url": "https://api.github.com/users/sodre/followers",
"following_url": "https://api.github.com/users/sodre/following{/other_user}",
"gists_url": "https://api.github.com/users/sodre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sodre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sodre/subscriptions",
"organizations_url": "https://api.github.com/users/sodre/orgs",
"repos_url": "https://api.github.com/users/sodre/repos",
"events_url": "https://api.github.com/users/sodre/events{/privacy}",
"received_events_url": "https://api.github.com/users/sodre/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,545 | 1,545 | 1,545 | CONTRIBUTOR | null | The `LICENSE` file in the git repository contains the Apache license text
but it is not included in the source `.tar.gz` distribution.
This PR adds a `MANIFEST.in` file with a directive to include the LICENSE. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/128/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/128",
"html_url": "https://github.com/huggingface/transformers/pull/128",
"diff_url": "https://github.com/huggingface/transformers/pull/128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/128.patch",
"merged_at": 1545210953000
} |
https://api.github.com/repos/huggingface/transformers/issues/127 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/127/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/127/comments | https://api.github.com/repos/huggingface/transformers/issues/127/events | https://github.com/huggingface/transformers/pull/127 | 392,191,069 | MDExOlB1bGxSZXF1ZXN0MjM5NTE2NTc4 | 127 | raises value error for bert tokenizer for long sequences | {
"login": "patrick-s-h-lewis",
"id": 15031366,
"node_id": "MDQ6VXNlcjE1MDMxMzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/15031366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrick-s-h-lewis",
"html_url": "https://github.com/patrick-s-h-lewis",
"followers_url": "https://api.github.com/users/patrick-s-h-lewis/followers",
"following_url": "https://api.github.com/users/patrick-s-h-lewis/following{/other_user}",
"gists_url": "https://api.github.com/users/patrick-s-h-lewis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrick-s-h-lewis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrick-s-h-lewis/subscriptions",
"organizations_url": "https://api.github.com/users/patrick-s-h-lewis/orgs",
"repos_url": "https://api.github.com/users/patrick-s-h-lewis/repos",
"events_url": "https://api.github.com/users/patrick-s-h-lewis/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrick-s-h-lewis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @patrick-s-h-lewis, this is nice.\r\n\r\nThe max number of positional embeddings is also available in the pretrained models configuration files (as `max_position_embeddings`) but accessing this requires some change in the models stored on S3 (not storing them as tar.gz files) so I will take care of it in the next release."
] | 1,545 | 1,545 | 1,545 | CONTRIBUTOR | null | Addresses #125.
(all pre-trained bert models have a positional embedding matrix with 512 embeddings. Sequences longer than 512 tokens will cause indexing errors when you attempt to run a bert forward pass on them)
Added a max_len arg to the BERT tokenizer. The function convert_tokens_to_indices will raise a ValueError if the input list of tokens is longer than max_len.
If no max_len is supplied, then no ValueError will be raised, however long the sequence is.
Pre-trained BERT models have max_len set to 512 at object construction time (in BertTokenizer.from_pretrained). It can be overridden by explicitly passing max_len to BertTokenizer.from_pretrained as a kwarg.
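For illustration, usage would look something like this (sketch):
```python
from pytorch_pretrained_bert import BertTokenizer

# max_len is picked up automatically for the released pre-trained models (512)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# ...or overridden explicitly via the kwarg added in this PR
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', max_len=512)
```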
If BERT models with larger positional embedding matrices are released, it is possible to have different max_lens for different pretrained models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/127/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/127",
"html_url": "https://github.com/huggingface/transformers/pull/127",
"diff_url": "https://github.com/huggingface/transformers/pull/127.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/127.patch",
"merged_at": 1545211758000
} |
https://api.github.com/repos/huggingface/transformers/issues/126 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/126/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/126/comments | https://api.github.com/repos/huggingface/transformers/issues/126/events | https://github.com/huggingface/transformers/issues/126 | 392,154,195 | MDU6SXNzdWUzOTIxNTQxOTU= | 126 | Benchmarking Prediction Speed | {
"login": "jaderabbit",
"id": 5547095,
"node_id": "MDQ6VXNlcjU1NDcwOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5547095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaderabbit",
"html_url": "https://github.com/jaderabbit",
"followers_url": "https://api.github.com/users/jaderabbit/followers",
"following_url": "https://api.github.com/users/jaderabbit/following{/other_user}",
"gists_url": "https://api.github.com/users/jaderabbit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaderabbit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaderabbit/subscriptions",
"organizations_url": "https://api.github.com/users/jaderabbit/orgs",
"repos_url": "https://api.github.com/users/jaderabbit/repos",
"events_url": "https://api.github.com/users/jaderabbit/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaderabbit/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Do you have a dataset in mind for the benchmark?\r\nWe can do a simple benchmark by timing the duration of evaluation on the SQuAD dev set for example.",
"Yes, that would be perfect! Ideally, it would exclude loading and setting up the model (something that the tf implementation literally does not allow for :P) ",
"Hi Jade,\r\n\r\nI did some benchmarking on a V100 GPU. You can check the script I used on the `benchmark` branch (mostly added timing to `run_squad`).\r\n\r\nHere are the results:\r\n![prediction_speed_bert_1](https://user-images.githubusercontent.com/7353373/50219266-f4deeb00-038e-11e9-9bcc-5077707b8b61.png)\r\n\r\nmax_seq_length | fp32 | fp16\r\n-- | -- | --\r\n384 | 140 | 352\r\n256 | 230 | 751\r\n128 | 488 | 1600\r\n64 | 1030 | 3663\r\n\r\nI will give a look on an older K80 (without fp16 support) when I have time.\r\n\r\n",
"This is fantastic! Thank you so so so so much! \r\n\r\nIf you get a chance to do the K80, that would be brilliant. I'll try run it when I get time. Currently doing a cost versus speed comparison just to get a feel. ",
"You can run it like this for `fp32` (just remove `--do_train`):\r\n```bash\r\npython run_squad.py \\\r\n --bert_model bert-base-uncased \\\r\n --do_predict \\\r\n --do_lower_case \\\r\n --train_file $SQUAD_DIR/train-v1.1.json \\\r\n --predict_file $SQUAD_DIR/dev-v1.1.json \\\r\n --predict_batch_size 128 \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2.0 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir /tmp/debug_squad/\r\n```\r\n\r\nAnd like this for `fp16` (add `--predict_fp16`):\r\n```bash\r\npython run_squad.py \\\r\n --bert_model bert-base-uncased \\\r\n --do_predict \\\r\n --predict_fp16 \\\r\n --do_lower_case \\\r\n --train_file $SQUAD_DIR/train-v1.1.json \\\r\n --predict_file $SQUAD_DIR/dev-v1.1.json \\\r\n --predict_batch_size 128 \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2.0 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir /tmp/debug_squad/\r\n```\r\n\r\nAdjust `predict_batch_size 128` to fill your GPU around 50% at least and adjust `--max_seq_length 384` to test with various sequence lengths. For small sequences (under 64 tokens) we should desactivate the windowing (related to `doc_stride`). I didn't take time to do that so the dataset reading didn't work (hence the absence of datapoint).",
"Fantastic. Tomorrow I'm going to run it for some smaller max sequence lengths (useful for my use case) and on some other GPUS: The Tesla M60 and then the K80 ",
"Managed to replicate your results on the V100. :) \r\n\r\nAlso, I've done the experiments below for sequences of length 64 on different GPUS. Will do the other sequence lengths when I get a chance. \r\n\r\n|GPU | max_seq_length | fp32 | fp16 |\r\n| -- | -- | -- | -- |\r\n| Tesla M60 | 64 | 210 | N/A |\r\n| Tesla K80 | 64 | 143 | N/A | \r\n \r\n",
"@thomwolf @jaderabbit Thank you for the experiments.\r\n\r\nI think these results deserves more visibility, maybe a dedicated markdown page or a section in the `README.md`?",
"Your are right Gregory.\r\nThe readme is starting to be too big in my opinion.\r\nI will try to setup a sphinx/ReadTheDocs online doc later this month (feel free to start a PR if you have experience in these kind of stuff).",
"I'm more or less new to sphinx but I would be happy to work on it with you.",
"Sure, if you want help that could definitely speed up the process.\r\n\r\nThe first step would be to create a new branch to work on with a `doc`folder and then generate the doc in the folder using sphinx.\r\n\r\nGood introductions to sphinx and readthedoc are here: http://www.ericholscher.com/blog/2016/jul/1/sphinx-and-rtd-for-writers/\r\nand here: https://docs.readthedocs.io/en/latest/intro/getting-started-with-sphinx.html\r\n\r\nWe will need to add some dependencies for the but we should strive to keep it as light as possible.\r\nHere is an example of repo I've worked on recently (still a draft but the doc is functional) https://github.com/huggingface/adversarialnlp",
"Hi @thomwolf ,\r\nI am looking to deploy a pre-trained squad-bert model to make predictions in real-time. \r\nRight now when I run: \r\n`python run_squad.py \\\r\n --bert_model bert-base-uncased \\\r\n --do_predict \\\r\n --do_lower_case \\\r\n --train_file $SQUAD_DIR/train-v1.1.json \\\r\n --predict_file $SQUAD_DIR/test.json \\\r\n --predict_batch_size 128 \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2.0 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir /tmp/debug_squad/`\r\nit takes 22 seconds to generate the prediction. Is there a way to reduce the amount off time taken to less than a second?\r\n\r\nThe \"test.json\" has one context and 1 question on the same. It looks like this:\r\n`{\r\n \"data\": [\r\n {\r\n \"title\": \"Arjun\",\r\n \"paragraphs\": [\r\n {\r\n \"context\": \"Arjun died in 1920. The American Football Club (AFC) celebrated this death. Arjun now haunts NFC. He used to love playing football. But nobody liked him.\",\r\n \"qas\": [\r\n {\r\n \"question\": \"When did Arjun die?\",\r\n \"id\": \"56be4db0acb8001400a502ed\"\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n}`\r\n\r\nPlease help me with this. I switched to using the PyTorch implementation hoping that getting a saved model and making predictions using the saved model will be easier in PyTorch. ",
"@apurvaasf Might be worth opening another ticket since that's slightly different to this. It shouldn't be too hard to write your own code for deployment. The trick is to make sure it does all the loading once, and just calls predict each time you need a prediction. \r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @thomwolf and thanks for the amazing implementation. I wonder what is the inference speed with a 512 batch size. It seems to take a lot of time to convert to GPU (1000msec for a batch size of 32) and I wonder if there is any quick speedup/fix. I am concerned with the latency rather than the throughput.",
"> Hi @thomwolf and thanks for the amazing implementation. I wonder what is the inference speed with a 512 batch size. It seems to take a lot of time to convert to GPU (1000msec for a batch size of 32) and I wonder if there is any quick speedup/fix. I am concerned with the latency rather than the throughput.\r\n\r\nHave you found any solutions? I've met the same problem.\r\nThe inference time is fast, but takes a lot of time to convert to GPU and convert the result to CPU for post-processing.",
"> > Hi @thomwolf and thanks for the amazing implementation. I wonder what is the inference speed with a 512 batch size. It seems to take a lot of time to convert to GPU (1000msec for a batch size of 32) and I wonder if there is any quick speedup/fix. I am concerned with the latency rather than the throughput.\r\n> \r\n> Have you found any solutions? I've met the same problem.\r\n> The inference time is fast, but takes a lot of time to convert to GPU and convert the result to CPU for post-processing.\r\n\r\n\r\n> albanD commented on 25 Mar\r\n> Hi,\r\n> \r\n> We use github issues only for bugs or feature requests.\r\n> Please use the forum to ask questions: https://discuss.pytorch.org/ as mentionned in the template you used.\r\n> \r\n> Note that in your case, you are most likely missing torch.cuda.syncrhonize() when timing your GPU code which makes the copy look much slower than it is because it has to wait for the rest of the work to be done.\r\n\r\n\r\n#Pytorch#35292"
] | 1,545 | 1,591 | 1,557 | CONTRIBUTOR | null | In reference to the following [tweet](https://twitter.com/Thom_Wolf/status/1074983741716602882):
Would it be possible to do a benchmark on the speed of prediction? I was working with the TensorFlow version of BERT, but it uses the new Estimators and I'm struggling to find a straightforward way to benchmark it since it all gets hidden in layers of the computation graph. I'd imagine PyTorch is more forgiving in this regard. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/126/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/125 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/125/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/125/comments | https://api.github.com/repos/huggingface/transformers/issues/125/events | https://github.com/huggingface/transformers/issues/125 | 392,093,383 | MDU6SXNzdWUzOTIwOTMzODM= | 125 | Warning/Assert when embedding sequences longer than positional embedding size | {
"login": "patrick-s-h-lewis",
"id": 15031366,
"node_id": "MDQ6VXNlcjE1MDMxMzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/15031366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrick-s-h-lewis",
"html_url": "https://github.com/patrick-s-h-lewis",
"followers_url": "https://api.github.com/users/patrick-s-h-lewis/followers",
"following_url": "https://api.github.com/users/patrick-s-h-lewis/following{/other_user}",
"gists_url": "https://api.github.com/users/patrick-s-h-lewis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrick-s-h-lewis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrick-s-h-lewis/subscriptions",
"organizations_url": "https://api.github.com/users/patrick-s-h-lewis/orgs",
"repos_url": "https://api.github.com/users/patrick-s-h-lewis/repos",
"events_url": "https://api.github.com/users/patrick-s-h-lewis/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrick-s-h-lewis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could do that indeed Patrick.\r\nIn particular when the tokenizer is loaded from one of Google pre-trained model.\r\nIf you have a working implementation feel free to do a PR.\r\nOtherwise I will have a look at that when I start working on the next release.",
"Happy to do a PR :) will do today or tomorrow"
] | 1,545 | 1,546 | 1,546 | CONTRIBUTOR | null | Hi team, love the work.
Just a feature suggestion: when running on GPU (presumably the CPU too), BERT will break when you try to run on sentences longer than 512 tokens (on bert-base).
This is because the position embedding matrix size is only 512 (or whatever else it is for the other BERT models).
Could the tokenizer have an assert/warning that doesn't allow you to tokenize a sentence longer than the number of positional embeddings, so that you get a better error message than the somewhat scary (uncatchable) CUDA error?
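Something along these lines is what I have in mind, as a rough standalone sketch (names are illustrative, not the library's API):
```python
def check_sequence_length(token_ids, max_position_embeddings=512):
    # Fail early, before the model indexes past its positional embedding matrix
    if len(token_ids) > max_position_embeddings:
        raise ValueError(
            "Sequence of length %d exceeds the %d positional embeddings of this "
            "BERT model and would cause indexing errors in the forward pass"
            % (len(token_ids), max_position_embeddings))
    return token_ids
```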
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/125/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/124 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/124/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/124/comments | https://api.github.com/repos/huggingface/transformers/issues/124/events | https://github.com/huggingface/transformers/pull/124 | 392,073,659 | MDExOlB1bGxSZXF1ZXN0MjM5NDIzMzU5 | 124 | Add example for fine tuning BERT language model | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This looks like a great addition!\r\n\r\nIs it a full re-implementation of the pre-training script?",
"The implementation uses the same sampling parameters and logic, but it's not a one-by-one re-implementation of the original pre-training script. \r\n\r\n**Main differences:**\r\n- In the original repo they first create a training set of TFrecords from a raw corpus ([create_pretraining_data.py](https://github.com/google-research/bert/blob/master/create_pretraining_data.py)) and then perform model training using [run_pretraining.py](https://github.com/google-research/bert/blob/master/run_pretraining.py). We decided \r\nagainst this two step procedure and do the conversion from raw text to sample \"on the fly\" (more similar to [this repo from codertimo](https://github.com/codertimo/BERT-pytorch)). With this we can actually generate new samples every epoch.\r\n- We currently feed in pair of lines (= sentences) as one sample, while the [original repo](https://github.com/google-research/bert/blob/master/create_pretraining_data.py#L229) fills 90% of samples up with more sentences until max_seq_length is reached (for our use case this did not make any sense)\r\n\r\n**Main similarities:**\r\n- All sampling / masking probabilities and parameters\r\n- Format of raw corpus (one sentence per line & empty line as doc delimiter) \r\n- Sampling strategy: Random nextSentence must be from another document\r\n- The data reader [of codertimo](https://github.com/codertimo/BERT-pytorch) is similar to our code, but didn't really match the original method of sampling.\r\n\r\nHappy to clarify further details!",
"Hi @deepset-ai this is great and, just a suggestion, maybe if this makes it to the repo it would be great to include something in the README too about this functionality in this pull request?",
"Just added some basic documentation to the README. Happy to include more, if @thomwolf thinks that this makes sense.",
"Yes, I was going to ask you to add some information in the readme, it's great. The more is the better. If you can also add instructions on how to download a dataset for the training as in the other examples it would be perfect. If your dataset is private, do you have in mind another dataset that would let the users try your script easily? If not it's ok, don't worry.\r\n\r\nAnother thing is that the `fp16` logic has now been switched to NVIDIA's [apex module](https//github.com/nvidia/apex) and we have gotten rid of the `optimize_on_cpu` option (see the [relevant PR](#116) for more details). You can see the changes in the current examples like `run_squad.py`, it's actually a lot simpler since we don't have to manage parameters copy in the example and it's also faster. Do you think you could adapt the fp16 parts of your script similarly?",
"This is something I'd been working on as well, congrats on a nice implementation!\r\n\r\nOne question, though: I noticed you stripped out the code for evaluating on a test set, but when fine-tuning the LM on a smaller corpus, would it be worth keeping that in? Overfitting is much more of a risk in a smaller corpus.",
"> This is something I'd been working on as well, congrats on a nice implementation!\r\n> \r\n> One question, though: I noticed you stripped out the code for evaluating on a test set, but when fine-tuning the LM on a smaller corpus, would it be worth keeping that in? Overfitting is much more of a risk in a smaller corpus.\r\n\r\n@Rocketknight1, you are right that we will probably need some better evaluation here. Currently, I have the feeling though that the evaluation on down-stream tasks is more meaningful (see also Jacob Devlin's comment [here](https://github.com/google-research/bert/issues/95#issuecomment-437599265)). But in addition, some better monitoring of the loss during and after training would be nice. \r\n\r\nDo you already have something in place and would like to contribute on this? Otherwise, I will try to find some time during the upcoming holidays to add this.",
"> \r\n> \r\n> > This is something I'd been working on as well, congrats on a nice implementation!\r\n> > One question, though: I noticed you stripped out the code for evaluating on a test set, but when fine-tuning the LM on a smaller corpus, would it be worth keeping that in? Overfitting is much more of a risk in a smaller corpus.\r\n> \r\n> @Rocketknight1, you are right that we will probably need some better evaluation here. Currently, I have the feeling though that the evaluation on down-stream tasks is more meaningful (see also Jacob Devlin's comment [here](https://github.com/google-research/bert/issues/95#issuecomment-437599265)). But in addition, some better monitoring of the loss during and after training would be nice.\r\n> \r\n> Do you already have something in place and would like to contribute on this? Otherwise, I will try to find some time during the upcoming holidays to add this.\r\n\r\nI don't have any evaluation code either, unfortunately! It might be easier to just evaluate on the final classification task, so it's not really urgent. I'll experiment with LM fine-tuning when I'm back at work in January. If I get good benefits on classification tasks I'll see what effect early stopping based on validation loss has, and if that turns out to be useful too I can submit a PR for it?",
"Have you thought about extending the vocabulary after fine-tuning on custom dataset. This could be useful if the custom dataset has specific terms related to that domain. ",
"> Have you thought about extending the vocabulary after fine-tuning on custom dataset. This could be useful if the custom dataset has specific terms related to that domain.\r\n\r\nAdjusting the vocabulary before fine-tuning could be interesting, but you would need some smart approach to exchange \"less important\" tokens from the original byte pair vocab with \"important\" ones from your custom corpus (while maintaining the pre-trained embeddings for the rest of the vocab meaningful). \r\nWe don't work on this at the moment. Looking forward to a PR, if you have time to work on this. ",
"> > Have you thought about extending the vocabulary after fine-tuning on custom dataset. This could be useful if the custom dataset has specific terms related to that domain.\r\n> \r\n> Adjusting the vocabulary before fine-tuning could be interesting, but you would need some smart approach to exchange \"less important\" tokens from the original byte pair vocab with \"important\" ones from your custom corpus (while maintaining the pre-trained embeddings for the rest of the vocab meaningful).\r\n> We don't work on this at the moment. Looking forward to a PR, if you have time to work on this.\r\n\r\nYes I am working on it. The idea is to add more items to the pretrained vocabulary. Also will adjust the model layers: bert.embeddings.word_embeddings.weight, cls.predictions.decoder.weight with the mean weights and also update cls.predictions.bias with mean bias for additional vocabulary words.\r\n\r\nWill send out a PR once I test it.",
"Ok this looks very good, I am merging, thanks a lot @tholor!\r\n"
] | 1,545 | 1,597 | 1,546 | NONE | null | We are currently working on fine-tuning the language model on a new target corpus. This should improve the model if the language style in your target corpus differs significantly from the one initially used for training BERT (Wiki + BookCorpus) but your corpus is still too small for training BERT from scratch. In our case, we apply this to a rather technical English corpus.
The sample script loads a pre-trained BERT model and fine-tunes it as a language model (masked tokens & nextSentence) on your target corpus. The samples from the target corpus can either be fed to the model directly from memory or read from disk one by one.
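For context, the natural head for this kind of fine-tuning is the combined masked-LM + next-sentence model; a minimal sketch (the example script may wire things up differently):
```python
from pytorch_pretrained_bert import BertForPreTraining

# Loads the pre-trained weights together with both pre-training heads
# (masked language modelling and next-sentence prediction)
model = BertForPreTraining.from_pretrained('bert-base-uncased')
```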
Training the language model from scratch, without loading a pre-trained BERT model, is also not very difficult to do from here. In contrast to the original TF repo, you can do the training with multiple GPUs instead of a TPU.
We thought this might also be helpful for others. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/124/reactions",
"total_count": 19,
"+1": 16,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/124/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/124",
"html_url": "https://github.com/huggingface/transformers/pull/124",
"diff_url": "https://github.com/huggingface/transformers/pull/124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/124.patch",
"merged_at": 1546859031000
} |
https://api.github.com/repos/huggingface/transformers/issues/123 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/123/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/123/comments | https://api.github.com/repos/huggingface/transformers/issues/123/events | https://github.com/huggingface/transformers/issues/123 | 391,979,075 | MDU6SXNzdWUzOTE5NzkwNzU= | 123 | big memory occupied | {
"login": "AIRobotZhang",
"id": 20748608,
"node_id": "MDQ6VXNlcjIwNzQ4NjA4",
"avatar_url": "https://avatars.githubusercontent.com/u/20748608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AIRobotZhang",
"html_url": "https://github.com/AIRobotZhang",
"followers_url": "https://api.github.com/users/AIRobotZhang/followers",
"following_url": "https://api.github.com/users/AIRobotZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/AIRobotZhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AIRobotZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AIRobotZhang/subscriptions",
"organizations_url": "https://api.github.com/users/AIRobotZhang/orgs",
"repos_url": "https://api.github.com/users/AIRobotZhang/repos",
"events_url": "https://api.github.com/users/AIRobotZhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/AIRobotZhang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should lower the batch size probably"
] | 1,545 | 1,545 | 1,545 | NONE | null | When I run the examples for MRPC, my program is always killed because of high memory usage. Has anyone encountered this issue? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/123/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/122 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/122/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/122/comments | https://api.github.com/repos/huggingface/transformers/issues/122/events | https://github.com/huggingface/transformers/issues/122 | 391,564,653 | MDU6SXNzdWUzOTE1NjQ2NTM= | 122 | _load_from_state_dict() takes 7 positional arguments but 8 were given | {
"login": "guanlongtianzi",
"id": 10386366,
"node_id": "MDQ6VXNlcjEwMzg2MzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/10386366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guanlongtianzi",
"html_url": "https://github.com/guanlongtianzi",
"followers_url": "https://api.github.com/users/guanlongtianzi/followers",
"following_url": "https://api.github.com/users/guanlongtianzi/following{/other_user}",
"gists_url": "https://api.github.com/users/guanlongtianzi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guanlongtianzi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guanlongtianzi/subscriptions",
"organizations_url": "https://api.github.com/users/guanlongtianzi/orgs",
"repos_url": "https://api.github.com/users/guanlongtianzi/repos",
"events_url": "https://api.github.com/users/guanlongtianzi/events{/privacy}",
"received_events_url": "https://api.github.com/users/guanlongtianzi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Full log of the error?",
"This is caused by pytorch version.\r\nI found , In 0.4.0 version, _load_from_state_dict() only take 7 arguments, but In 0.4.1 and this code, we need feed 8 arguments.\r\n\r\n```\r\nmodule._load_from_state_dict(\r\n state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)\r\n```\r\n\r\nlocal_metadata should be removed in pytorch 0.4.0\r\n",
"Ok thanks @SummmerSnow !"
] | 1,545 | 1,546 | 1,546 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/122/timeline | completed | null | null |
|
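The comments above trace this error to the PyTorch version: `_load_from_state_dict` gained a `local_metadata` argument in 0.4.1, so code written against one signature passes the wrong number of arguments on the other. A rough, version-guarded sketch of calling it both ways (the toy module and the exact version cut-off are assumptions):
```python
import torch
import torch.nn as nn

module = nn.Linear(4, 4)                       # toy module standing in for a BERT submodule
state_dict = module.state_dict()
missing, unexpected, errors = [], [], []

if torch.__version__ >= "0.4.1":               # newer signature includes local_metadata
    module._load_from_state_dict(state_dict, "", {}, True, missing, unexpected, errors)
else:                                          # 0.4.0 signature has no local_metadata argument
    module._load_from_state_dict(state_dict, "", True, missing, unexpected, errors)
```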
https://api.github.com/repos/huggingface/transformers/issues/121 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/121/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/121/comments | https://api.github.com/repos/huggingface/transformers/issues/121/events | https://github.com/huggingface/transformers/issues/121 | 391,458,997 | MDU6SXNzdWUzOTE0NTg5OTc= | 121 | High accuracy for CoLA task | {
"login": "pfecht",
"id": 26819398,
"node_id": "MDQ6VXNlcjI2ODE5Mzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/26819398?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pfecht",
"html_url": "https://github.com/pfecht",
"followers_url": "https://api.github.com/users/pfecht/followers",
"following_url": "https://api.github.com/users/pfecht/following{/other_user}",
"gists_url": "https://api.github.com/users/pfecht/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pfecht/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pfecht/subscriptions",
"organizations_url": "https://api.github.com/users/pfecht/orgs",
"repos_url": "https://api.github.com/users/pfecht/repos",
"events_url": "https://api.github.com/users/pfecht/events{/privacy}",
"received_events_url": "https://api.github.com/users/pfecht/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The metric used for evaluation of CoLA in the GLUE benchmark is not accuracy but the https://en.wikipedia.org/wiki/Matthews_correlation_coefficient (see https://gluebenchmark.com/tasks).\r\nIndeed authors report in https://arxiv.org/abs/1810.04805 0.521 for Matthews correlation with BERT-base.",
"Makes sense, looks like I missed that point. Thank you."
] | 1,544 | 1,545 | 1,545 | NONE | null | I am trying to reproduce the CoLA results from the BERT paper (BERT-Base, single GPU).
Running the following command:
```
python run_classifier.py \
--task_name cola \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $GLUE_DIR/CoLA/ \
--bert_model bert-base-uncased \
--max_seq_length 128 \
--train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir $OUT_DIR/cola_output/
```
I get eval results of
```
12/16/2018 12:31:34 - INFO - __main__ - ***** Eval results *****
12/16/2018 12:31:34 - INFO - __main__ - eval_accuracy = 0.8302972195589645
12/16/2018 12:31:34 - INFO - __main__ - eval_loss = 0.5117322660925734
12/16/2018 12:31:34 - INFO - __main__ - global_step = 804
12/16/2018 12:31:34 - INFO - __main__ - loss = 0.17348005173644468
```
An accuracy of 0.83 would be fantastic, but compared to the 0.521 stated in the paper this doesn't seem very realistic.
Any suggestions what I'm doing wrong?
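As the first comment above explains, GLUE scores CoLA with the Matthews correlation coefficient rather than accuracy, so the 0.83 and 0.521 numbers measure different things. A small sketch of how the two metrics diverge on an unbalanced label set (the labels below are made up for illustration):
```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Made-up, unbalanced predictions: mostly "acceptable", roughly like CoLA's dev set.
y_true = [1, 1, 1, 1, 1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]

print("accuracy:", accuracy_score(y_true, y_pred))         # 0.9, looks great
print("matthews:", matthews_corrcoef(y_true, y_pred))      # ~0.67, much less flattering
```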
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/121/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/120 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/120/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/120/comments | https://api.github.com/repos/huggingface/transformers/issues/120/events | https://github.com/huggingface/transformers/issues/120 | 391,402,013 | MDU6SXNzdWUzOTE0MDIwMTM= | 120 | RuntimeError: Expected object of type torch.LongTensor but found type torch.cuda.LongTensor for argument #3 'index' | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The issue was, not properly loading the model file and moving it to GPU. "
] | 1,544 | 1,544 | 1,544 | CONTRIBUTOR | null | I am using part of your evaluation code, with slight modifications:
https://github.com/danyaljj/pytorch-pretrained-BERT/blob/92e22d710287db1b4aa4fda951714887878fa728/examples/daniel_run.py#L582-L616
Wondering if you have encountered the following error:
```
(env3.6) khashab2@gissing:/shared/shelley/khashab2/pytorch-pretrained-BERT$ python3.6 examples/daniel_run.py
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
loaded the model to base . . .
loading the bert . . .
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1248501532/1248501532 [00:26<00:00, 46643749.96B/s]
Evaluating: 0%| | 0/1355 [00:00<?, ?it/s]
Traceback (most recent call last):
File "examples/daniel_run.py", line 817, in <module>
evaluate_model()
File "examples/daniel_run.py", line 606, in evaluate_model
batch_start_logits, batch_end_logits = model(input_ids, segment_ids, input_mask)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py", line 1096, in forward
sequence_output, _ = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py", line 626, in forward
embedding_output = self.embeddings(input_ids, token_type_ids)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py", line 193, in forward
words_embeddings = self.word_embeddings(input_ids)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 110, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/functional.py", line 1110, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of type torch.LongTensor but found type torch.cuda.LongTensor for argument #3 'index'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/120/timeline | completed | null | null |
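The comment above attributes the error to the model and the input batch ending up on different devices. A minimal sketch of the usual fix, keeping weights and tensors together (the toy embedding stands in for BERT, and CUDA availability is checked rather than assumed):
```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Embedding(100, 16).to(device)        # move the model's weights to the GPU when available
input_ids = torch.randint(0, 100, (2, 8))       # LongTensor created on the CPU

output = model(input_ids.to(device))            # move inputs to the same device before the forward pass
print(output.shape)
```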
https://api.github.com/repos/huggingface/transformers/issues/119 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/119/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/119/comments | https://api.github.com/repos/huggingface/transformers/issues/119/events | https://github.com/huggingface/transformers/pull/119 | 391,231,432 | MDExOlB1bGxSZXF1ZXN0MjM4ODE1NDQx | 119 | Minor README fix | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed!"
] | 1,544 | 1,544 | 1,544 | CONTRIBUTOR | null | I think `optimize_on_cpu` option was dropped in #112 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/119/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/119",
"html_url": "https://github.com/huggingface/transformers/pull/119",
"diff_url": "https://github.com/huggingface/transformers/pull/119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/119.patch",
"merged_at": 1544826588000
} |
https://api.github.com/repos/huggingface/transformers/issues/118 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/118/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/118/comments | https://api.github.com/repos/huggingface/transformers/issues/118/events | https://github.com/huggingface/transformers/issues/118 | 390,950,821 | MDU6SXNzdWUzOTA5NTA4MjE= | 118 | Segmentation fault (core dumped) | {
"login": "SummmerSnow",
"id": 22763522,
"node_id": "MDQ6VXNlcjIyNzYzNTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/22763522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SummmerSnow",
"html_url": "https://github.com/SummmerSnow",
"followers_url": "https://api.github.com/users/SummmerSnow/followers",
"following_url": "https://api.github.com/users/SummmerSnow/following{/other_user}",
"gists_url": "https://api.github.com/users/SummmerSnow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SummmerSnow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SummmerSnow/subscriptions",
"organizations_url": "https://api.github.com/users/SummmerSnow/orgs",
"repos_url": "https://api.github.com/users/SummmerSnow/repos",
"events_url": "https://api.github.com/users/SummmerSnow/events{/privacy}",
"received_events_url": "https://api.github.com/users/SummmerSnow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, you need to give me more information (a screen copy of a full log of the error).",
"Actually, this is all I got:\r\n\r\n>> python bert.py\r\n12/15/2018 19:43:06 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file /home/snow/bert_models_path/vocab.txt\r\n12/15/2018 19:43:06 - INFO - pytorch_pretrained_bert.modeling - loading archive file /home/snow/bert_models_path\r\n12/15/2018 19:43:06 - INFO - pytorch_pretrained_bert.modeling - Model config {\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"max_position_embeddings\": 512,\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 30522\r\n}\r\n\r\nbert_models_path load!!!\r\nSegmentation fault (core dumped)",
"There is no special c function in our package, it's all python code.\r\nMaybe you just don't have enough memory to load BERT?\r\nOr some dependency is not well installed like pytorch (or apex if you are using it).",
"Thanks for your advice.\r\nMaybe because of my pytorch version(0.4.0) I not not sure.\r\nI download the source code instead of pip install and using 0.4.1 version and run successfully.\r\n\r\nThanks for your code and advice again~",
"![εΎη](https://user-images.githubusercontent.com/26063832/57267682-8d160c00-70b3-11e9-9d59-5866c1321272.png)\r\nI also have this problem, and my torch version is 1.0.1 . I have tried to download the source code instead of pip install but also failed.",
"I have the same question @wyx518 ",
"Did you solve this ? @zhaohongjie ",
"@zhaohongjie did you solve this? I have the same question too.",
"Has someone solved this issue by any chance?\r\n",
"> Has someone solved this issue by any chance?\r\n\r\nDo you have the same problem? You could try to debug it by import transformers and torch only, then call torch.nn.CrossEntropyLoss() to see if it results in Segmentation fault. I accidentally fixed this error by install more packages",
"Hello,\r\n\r\nHad this error with CamembertForSequenceClassification.from_pretrained(), needed to update torch==1.5.1 and torchvision==0.6.1 ",
"I had the same issue while loading pretrained models.\r\nUpdated to the last version of Pytorch (1.5.1) and worked fine.",
"Yup, that worked guys! Thank you @Daugit and @gabrer ",
"Update by pip install torch==1.5.1 and the problem solved"
] | 1,544 | 1,600 | 1,545 | NONE | null | Hi,
I downloaded the pretrained model and vocabulary file, and wanted to test BertModel to get hidden states.
When this
```encoded_layers, _ = model(tokens_tensor, segments_tensors)``` line runs, I get this error: Segmentation fault (core dumped).
I wonder what caused this error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/118/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/117 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/117/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/117/comments | https://api.github.com/repos/huggingface/transformers/issues/117/events | https://github.com/huggingface/transformers/issues/117 | 390,793,183 | MDU6SXNzdWUzOTA3OTMxODM= | 117 | logging.basicConfig overrides user logging | {
"login": "asafamr",
"id": 5182534,
"node_id": "MDQ6VXNlcjUxODI1MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5182534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asafamr",
"html_url": "https://github.com/asafamr",
"followers_url": "https://api.github.com/users/asafamr/followers",
"following_url": "https://api.github.com/users/asafamr/following{/other_user}",
"gists_url": "https://api.github.com/users/asafamr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asafamr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asafamr/subscriptions",
"organizations_url": "https://api.github.com/users/asafamr/orgs",
"repos_url": "https://api.github.com/users/asafamr/repos",
"events_url": "https://api.github.com/users/asafamr/events{/privacy}",
"received_events_url": "https://api.github.com/users/asafamr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're right. It's removed."
] | 1,544 | 1,544 | 1,544 | NONE | null | I think logging.basicConfig should not be called inside library code.
Check out this SO thread:
https://stackoverflow.com/questions/27016870/how-should-logging-be-used-in-a-python-package | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/117/timeline | completed | null | null |
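The convention described in the linked Stack Overflow thread, and adopted here by removing the `basicConfig` call, is that a library only creates module-level loggers and leaves handler configuration to the application. A small sketch of that split (the module and file names are invented):
```python
import logging

# Inside the library module: no basicConfig, no handlers forced on the user.
logger = logging.getLogger("my_library.tokenization")

def load_vocab(path):
    logger.info("loading vocabulary file %s", path)   # emitted only if the application configured logging
    return {}

# Inside the application: the user decides the format, level, and destination.
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(name)s - %(message)s")
    load_vocab("vocab.txt")
```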
https://api.github.com/repos/huggingface/transformers/issues/116 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/116/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/116/comments | https://api.github.com/repos/huggingface/transformers/issues/116/events | https://github.com/huggingface/transformers/pull/116 | 390,028,146 | MDExOlB1bGxSZXF1ZXN0MjM3ODg1MTEz | 116 | Change to use apex for better fp16 and multi-gpu support | {
"login": "FDecaYed",
"id": 17164548,
"node_id": "MDQ6VXNlcjE3MTY0NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17164548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FDecaYed",
"html_url": "https://github.com/FDecaYed",
"followers_url": "https://api.github.com/users/FDecaYed/followers",
"following_url": "https://api.github.com/users/FDecaYed/following{/other_user}",
"gists_url": "https://api.github.com/users/FDecaYed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FDecaYed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FDecaYed/subscriptions",
"organizations_url": "https://api.github.com/users/FDecaYed/orgs",
"repos_url": "https://api.github.com/users/FDecaYed/repos",
"events_url": "https://api.github.com/users/FDecaYed/events{/privacy}",
"received_events_url": "https://api.github.com/users/FDecaYed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That's really awesome! I love the work you guys did on apex and I would be super happy to have an 'official' implementation of BERT using apex (plus it showcases all the major modules: FusedAdam, FusedLayerNorm, 16bits, distributed optimizer...). And the speed improvement is impressive, fine-tuning BERT-large on SQuAD in 1h is amazing!\r\n\r\nJust three general questions:\r\n1. could you reproduce the numerical results of the examples (SQuAD and MRPC) with this implementation?\r\n2. did you test distributed training?\r\n3. the main issue I see right now is the fact that apex is not on pypi and users have to manually install it. Now that pytorch-pretrained-bert is used as a dependency in downstream librairies like [AllenNLP](https://github.com/allenai/allennlp/blob/master/requirements.txt#L71) it's important to keep a smooth install process. Can you guys put apex on pypi? If not we should add some logic to handle the case when apex is not installed. It's ok for the examples (`run_classifier` and `run_squad`) which are not part of the package per se but the modifications in `modeling.py` needs to be taken care of.",
"Hi @thomwolf ,\r\n1. I have been able to reproduce numerical results of the examples. It shows some variance with different random seeds, especially with MRPC. But that should be somewhat expected and overall the results seems the same as baseline.\r\nFor example, I got `{\"exact_match\": 84.0491958372753, \"f1\": 90.94106705651285}` running SQuAD BERT-Large with default dynamic loss scaling and seed. I did not store other results since they should be very easy to re-run.\r\n2. I sanity checked distributed training results while developing. I'll run more results and post it here.\r\n3. Adding fallback to modeling.py should be easy since we can use BertLayerNorm in there. We just need to make sure it share the same interface. For example parameter names, in case user wants to build groups base on names. As for pypi, @mcarilli what's you thought?\r\n\r\n-Deyu",
"update:\r\n1. I have tested SQuAD BERT-Large with 4 V100 on a DGX station. Here is the result:\r\n```\r\ntraining time: 20:56\r\nspeedup over 1 V100: 3.2x\r\nevaluation result: {\"exact_match\": 83.6329233680227, \"f1\": 90.68315529756794}\r\n```\r\ncommand used:\r\n`python3 -m torch.distributed.launch --nproc_per_node=4 ./run_squad.py --bert_model bert-large-uncased --do_train --do_predict --do_lower_case --train_file $SQUAD_DIR/train-v1.1.json --predict_file $SQUAD_DIR/dev-v1.1.json --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/ --train_batch_size 6 --fp16`\r\n\r\n2. I modified `model.py` so it now will fallback to BertLayerNorm when apex is not installed.\r\nParameters `gamma, beta` are changed to `weight, bias`.\r\n\r\n-Deyu",
"Ok thanks for the update!\r\n\r\nIt looks good to me, I will do a few tests on various hardwares and it'll be included in the new 0.4.0 release coming out today (hopefully)\r\n\r\nCongrats on the MLPerf results by the way!",
"@FDecaYed I am trying to reproduce your numbers but I can't get very close. I am using an [Azure NDv2 server](https://azure.microsoft.com/en-us/blog/unlocking-innovation-with-the-new-n-series-azure-virtual-machines/) with 8 NVIDIA Tesla V100 NVLINK interconnected GPUs and 40 Intel Skylake cores.\r\n\r\nSwitching to fp16 lowers the memory usage by half indeed but the training time stays about the same ie around (e.g. 100 seconds for `run_classifier` on 1 GPU and about 50 minutes for the 2 epochs of your distributed training command on `run_squad`, with 4 GPUs in that case).\r\n\r\nI have the new release of PyTorch 1.0.0, CUDA 10 and installed apex with cpp/cuda extensions. I am using the fourth-release branch on the present repo which was rebased from master with your PR.\r\n\r\nIf you have any insight I would be interested. Could the difference come from using a DGX versus an Azure server? Can you give me the exact command you used to train the `run_classifier` example for instance?\r\n",
"there could be a lot of things, let's sort them out one by one:\r\nThe command I used for MRPC example is\r\n`CUDA_VISIBLE_DEVICES=0 python3 ./run_classifier.py --task_name MRPC --do_train --do_eval --do_lower_case --data_dir $GLUE_DIR/MRPC/ --bert_model bert-base-uncased --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/mrpc_output/ --fp16`\r\nCUDA_VISIBLE_DEVICES is to make sure only one GPU is used. I noticed the code is using Dataparallel when there is only one process but more than 1 GPU in the box. `torch.nn.DataParallel` may not provide good speed on some cases. Are you running just one GPU on you 100 sec run? I reported time print by tqdm trange, is that the same number you are talking about here?\r\n\r\nFrom my past experience with cloud, single GPU number should not be that far from any DGX, unless you are bound by input. I doubt that's the case base on the workload. If we indeed are running and reporting the same thing, there must be some software differences. We are still in the progress moving up to pytorch 1.0, so my test was on 0.4. I'll merge your release branch and try on pytorch 1.0 on my side on DGX today.\r\n\r\nMeanwhile, this is the container I used for testing. You could try it on Azure and see if you can get my result. Note that it does not have latest apex installed, so you need uninstall apex and build latest inside.\r\nhttps://ngc.nvidia.com/catalog/containers/nvidia%2Fpytorch\r\n\r\n-Deyu",
"Thanks for the quick reply!\r\n\r\nThe timing I was reporting was the full timing for the training (3 iterations for the MRPC example).\r\nUsing your MRPC example command I get this example from training on a single V100: about 1 min 24 second of training, ie. around 84 seconds (~27 seconds per iteration).\r\nUsing static loss scale gives the same results.\r\n![image](https://user-images.githubusercontent.com/7353373/49967261-9e068b00-ff22-11e8-8ad3-b60bafbff0f2.png)\r\n\r\nAnd training without 16bits gives a total training time roughly similar: 1 min 31 seconds\r\n![image](https://user-images.githubusercontent.com/7353373/49967527-69470380-ff23-11e8-87eb-ecb344b5caea.png)\r\n\r\n",
"I tested on pytorch 1.0 and still getting the same speed up\r\n![screenshot from 2018-12-13 14-52-11](https://user-images.githubusercontent.com/17164548/49972649-28c99480-fee7-11e8-9dd7-1dabd8cdad65.png)\r\nI used the foruth-release branch and public dockerhub 1.0-cuda10.0-cudnn7-devel image here: \r\nhttps://hub.docker.com/r/pytorch/pytorch/tags/\r\nOnly modification I need was adding `encoding='utf-8'` reading csv.\r\nCould you run the same docker image and see if the speed is still the same? If so, could you do a quick profile with `nvprof -o bert-profile.nvvp` with just training 1 epoch and share the output? I don't have access to Azure now.\r\n",
"Ok, I got the 3-4x speed-up using the pytorch dockerhub 1.0-cuda10.0-cudnn7-devel image π₯\r\nThanks a lot for your help!\r\n\r\nI'm still wondering why I can't get these speedups outside of the docker container so I will try to investigate that a bit further (in particular since other people may start opening issues here :-).\r\n\r\nIf you have any further insight, don't hesitate to share :-)",
"Ok nailed it I think it was a question of not installing `cuda100` together with pytorch.\r\nEverything seems to work fine now!",
"Great! It'll be great if we can later update readme to document V100 expected speed as well.",
"Thanks for the nice work! @FDecaYed @thomwolf \r\n\r\nI tried fp16 training for bert-large. It has the imbalanced memory problem, which wastes gpu power a lot. The nvidia-smi results are shown as follows:\r\n\r\n```bash\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 410.79 Driver Version: 410.79 CUDA Version: 10.0 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n|===============================+======================+======================|\r\n| 0 Tesla V100-PCIE... Off | 0000A761:00:00.0 Off | 0 |\r\n| N/A 39C P0 124W / 250W | 15128MiB / 16130MiB | 99% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n| 1 Tesla V100-PCIE... Off | 0000C0BA:00:00.0 Off | 0 |\r\n| N/A 41C P0 116W / 250W | 10012MiB / 16130MiB | 95% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n| 2 Tesla V100-PCIE... Off | 0000D481:00:00.0 Off | 0 |\r\n| N/A 38C P0 80W / 250W | 10012MiB / 16130MiB | 91% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n| 3 Tesla V100-PCIE... Off | 0000EC9F:00:00.0 Off | 0 |\r\n| N/A 40C P0 61W / 250W | 10012MiB / 16130MiB | 95% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n\r\n+-----------------------------------------------------------------------------+\r\n| Processes: GPU Memory |\r\n| GPU PID Type Process name Usage |\r\n|=============================================================================|\r\n| 0 11870 C python 15117MiB |\r\n| 1 11870 C python 10001MiB |\r\n| 2 11870 C python 10001MiB |\r\n| 3 11870 C python 10001MiB |\r\n+-----------------------------------------------------------------------------+\r\n```",
"Is it already in add in pytorch-transformers? If so how do I use it, where should i specify the settings that I want to use Fp16 and apex and is apex already added in installation of pytorch transformers on anaconda 3?"
] | 1,544 | 1,563 | 1,544 | CONTRIBUTOR | null | Hi there,
This PR includes changes to improve FP16 and multi-GPU performance. We get over a 3.5x performance increase on Tesla V100 across all examples.
NVIDIA Apex (https://github.com/NVIDIA/apex) is added as a new dependency. It fixes issues with the existing fp16 implementation (for example, not converting loss/grad to float before scaling) and provides a more efficient implementation.
Below are the test results we ran on the MRPC and SQuAD examples. All baselines (`before` numbers) are fp32, since we found that is actually the best-performing config; the reason is that the optimizer is forced onto the CPU under the old fp16 path.
The `after` numbers are run with `--fp16` after this PR. All tests were done on a single Tesla V100 16GB.
MRPC on BERT-base:
```
before: 109 seconds, 9GB memory needed
after: 27 seconds, 5.5GB
speedup: 4x
```
SQuAD on BERT-base:
```
before: 90 minutes, 12.5GB
after: 24 minutes, 7.5GB
speedup: 3.75x
```
SQuAD on BERT-large:
```
before: 250 minutes, 15GB, with --train_batch_size 24 --gradient_accumulation_steps 6
after: 68 minutes, 14.5GB, with --train_batch_size 24 --gradient_accumulation_steps 3
speedup: 3.68x
```
The `optimize_on_cpu` option is also removed entirely from the code, since I can't find any situation where it is faster than `gradient_accumulation_steps`, assuming of course that at least a batch of 1 fits into GPU memory.
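For context on the loss-scaling issue mentioned above, here is a conceptual sketch of fp16 training with an fp32 master copy of the weights and a scaled loss. This is a plain-PyTorch illustration of the idea, not the apex code used in this PR; the toy model, the fixed scale factor, and the CPU fallback are all arbitrary choices:
```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
half = torch.float16 if device.type == "cuda" else torch.float32    # use fp16 only where it is supported

model = nn.Linear(16, 2).to(device=device, dtype=half)              # toy stand-in for BERT
master = [p.detach().float().clone().requires_grad_(True) for p in model.parameters()]
optimizer = torch.optim.SGD(master, lr=0.01)
loss_scale = 128.0

x = torch.randn(4, 16, device=device, dtype=half)
y = torch.randint(0, 2, (4,), device=device)

loss = nn.functional.cross_entropy(model(x).float(), y)             # compute the loss in fp32
(loss * loss_scale).backward()                                      # scale up so small fp16 grads do not flush to zero

for m, p in zip(master, model.parameters()):
    m.grad = p.grad.detach().float() / loss_scale                   # convert grads to fp32, then unscale
optimizer.step()

for m, p in zip(master, model.parameters()):
    p.data.copy_(m.data)                                            # copy updated fp32 masters back into the fp16 model
```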
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/116/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/116",
"html_url": "https://github.com/huggingface/transformers/pull/116",
"diff_url": "https://github.com/huggingface/transformers/pull/116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/116.patch",
"merged_at": 1544700758000
} |
https://api.github.com/repos/huggingface/transformers/issues/115 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/115/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/115/comments | https://api.github.com/repos/huggingface/transformers/issues/115/events | https://github.com/huggingface/transformers/issues/115 | 389,950,888 | MDU6SXNzdWUzODk5NTA4ODg= | 115 | How to run a saved model? | {
"login": "wahlforss",
"id": 73305,
"node_id": "MDQ6VXNlcjczMzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/73305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wahlforss",
"html_url": "https://github.com/wahlforss",
"followers_url": "https://api.github.com/users/wahlforss/followers",
"following_url": "https://api.github.com/users/wahlforss/following{/other_user}",
"gists_url": "https://api.github.com/users/wahlforss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wahlforss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wahlforss/subscriptions",
"organizations_url": "https://api.github.com/users/wahlforss/orgs",
"repos_url": "https://api.github.com/users/wahlforss/repos",
"events_url": "https://api.github.com/users/wahlforss/events{/privacy}",
"received_events_url": "https://api.github.com/users/wahlforss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It looks like @thomwolf is planning to illustrate this in the examples soon.\r\nYou find some useful code to do what you want to do in https://github.com/huggingface/pytorch-pretrained-BERT/pull/112/",
"Hi this is now included in the new release 0.4.0 and there are examples on how you can save and reload the models in the updated run_classifier, run_squad and run_swag."
] | 1,544 | 1,544 | 1,544 | NONE | null | How can you run the model without training it, if we have already trained a model with run_classifier? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/115/timeline | completed | null | null |
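The 0.4.0 release mentioned in the comments added save-and-reload examples to the fine-tuning scripts; a rough sketch of the pattern (the file name is a placeholder, and the `state_dict` argument to `from_pretrained` is the reload mechanism those examples use):
```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

# After fine-tuning: save only the weights.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
torch.save(model.state_dict(), "finetuned_model.bin")

# Later, for inference: rebuild the architecture and load the fine-tuned weights.
state_dict = torch.load("finetuned_model.bin", map_location="cpu")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", state_dict=state_dict)
model.eval()
```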
https://api.github.com/repos/huggingface/transformers/issues/114 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/114/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/114/comments | https://api.github.com/repos/huggingface/transformers/issues/114/events | https://github.com/huggingface/transformers/issues/114 | 389,846,897 | MDU6SXNzdWUzODk4NDY4OTc= | 114 | What is the best dataset structure for BERT? | {
"login": "wahlforss",
"id": 73305,
"node_id": "MDQ6VXNlcjczMzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/73305?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wahlforss",
"html_url": "https://github.com/wahlforss",
"followers_url": "https://api.github.com/users/wahlforss/followers",
"following_url": "https://api.github.com/users/wahlforss/following{/other_user}",
"gists_url": "https://api.github.com/users/wahlforss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wahlforss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wahlforss/subscriptions",
"organizations_url": "https://api.github.com/users/wahlforss/orgs",
"repos_url": "https://api.github.com/users/wahlforss/repos",
"events_url": "https://api.github.com/users/wahlforss/events{/privacy}",
"received_events_url": "https://api.github.com/users/wahlforss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,544 | 1,544 | 1,544 | NONE | null | First I want to say thanks for setting up all this!
I am using BertForSequenceClassification and am wondering what the optimal way is to structure my sequences.
Right now my sequences are blog posts, which can be upwards of 400 words long.
Would it be better to split my blog posts into sentences and use the sentences as my sequences instead?
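A quick way to see whether a whole post even fits is to count its WordPiece tokens against `max_seq_length`; a minimal sketch (the sample text and the 128-token limit are arbitrary, and BERT tops out at 512 positions):
```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
max_seq_length = 128                                       # common value in the fine-tuning examples; arbitrary here

post = "This is a stand-in sentence for a long blog post. " * 60   # roughly 400+ words
tokens = tokenizer.tokenize(post)

print(len(tokens), "WordPiece tokens")
if len(tokens) > max_seq_length - 2:                       # [CLS] and [SEP] take two of the positions
    print("post will be truncated; consider a larger max_seq_length (up to 512) or splitting the post")
```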
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/114/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/113/comments | https://api.github.com/repos/huggingface/transformers/issues/113/events | https://github.com/huggingface/transformers/pull/113 | 389,741,749 | MDExOlB1bGxSZXF1ZXN0MjM3NjYxMzY5 | 113 | fix compatibility with python 3.5.2 | {
"login": "hzhwcmhf",
"id": 1344510,
"node_id": "MDQ6VXNlcjEzNDQ1MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1344510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hzhwcmhf",
"html_url": "https://github.com/hzhwcmhf",
"followers_url": "https://api.github.com/users/hzhwcmhf/followers",
"following_url": "https://api.github.com/users/hzhwcmhf/following{/other_user}",
"gists_url": "https://api.github.com/users/hzhwcmhf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hzhwcmhf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hzhwcmhf/subscriptions",
"organizations_url": "https://api.github.com/users/hzhwcmhf/orgs",
"repos_url": "https://api.github.com/users/hzhwcmhf/repos",
"events_url": "https://api.github.com/users/hzhwcmhf/events{/privacy}",
"received_events_url": "https://api.github.com/users/hzhwcmhf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks, it could be nice to keep Python 3.5 compatibility indeed (see #110) but I think this will break (at least) the other examples (`run_squad` and `run_classifier`) which uses the Pathlib syntax `PATH / 'string'`.",
"I'm sorry for my previous stupid workaround, but now I modify some functions in ``file_utils.py``, just convert type for local variables. I think it won't affect the function behaviour.\r\nHowever, I only test the ``extract_features.py`` on python3.5. So I'm not sure it prefectly solve the problems, but it should be unharmful.",
"I think this will break [this line](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L484) at least.\r\n\r\nI we want to merge this PR you should check that the three examples and the tests are running (at least).",
"Did you see my second commit (485adde742)? I think I have fixed the problem you mentioned.\r\nNow I have tested all unittests and 3 examples, and they all work right on python 3.5.2.",
"Indeed, I missed this commit.\r\nOk, this solution makes sense, let's go for it!\r\nThanks!"
] | 1,544 | 1,544 | 1,544 | CONTRIBUTOR | null | When I run the following command on python 3.5.2
```
python3 extract_features.py --input_file input.txt --output_file output.txt --bert_model bert-base-uncased --do_lower_case
```
I get this error:
```
Traceback (most recent call last):
File "extract_features.py", line 298, in <module>
main()
File "extract_features.py", line 231, in main
tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case)
File "/home/huangfei/.local/lib/python3.5/site-packages/pytorch_pretrained_bert/tokenization.py", line 117, in from_pretrained
resolved_vocab_file = cached_path(vocab_file, cache_dir=cache_dir)
File "/home/huangfei/.local/lib/python3.5/site-packages/pytorch_pretrained_bert/file_utils.py", line 88, in cached_path
return get_from_cache(url_or_filename, cache_dir)
File "/home/huangfei/.local/lib/python3.5/site-packages/pytorch_pretrained_bert/file_utils.py", line 169, in get_from_cache
os.makedirs(cache_dir, exist_ok=True)
File "/usr/lib/python3.5/os.py", line 226, in makedirs
head, tail = path.split(name)
File "/usr/lib/python3.5/posixpath.py", line 103, in split
i = p.rfind(sep) + 1
AttributeError: 'PosixPath' object has no attribute 'rfind'
```
I found that `os.makedirs` doesn't support `PosixPath` in Python 3.5, so I made a change to fix this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/113/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/113",
"html_url": "https://github.com/huggingface/transformers/pull/113",
"diff_url": "https://github.com/huggingface/transformers/pull/113.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/113.patch",
"merged_at": 1544699715000
} |
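The merged fix boils down to converting path-like variables to plain strings before handing them to `os` calls, since Python 3.5's `os.makedirs` does not accept `pathlib` paths. A minimal sketch of the idea (the cache-directory name is a placeholder):
```python
import os
import tempfile
from pathlib import Path

cache_dir = Path(tempfile.gettempdir()) / "pytorch_pretrained_bert_demo"   # placeholder cache location

# On Python 3.5, os.makedirs(cache_dir, ...) raises AttributeError; converting to str works on 3.5 and later.
os.makedirs(str(cache_dir), exist_ok=True)
print("cache directory:", cache_dir)
```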
https://api.github.com/repos/huggingface/transformers/issues/112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/112/comments | https://api.github.com/repos/huggingface/transformers/issues/112/events | https://github.com/huggingface/transformers/pull/112 | 389,707,309 | MDExOlB1bGxSZXF1ZXN0MjM3NjM1MDky | 112 | Fourth release | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,544 | 1,545 | 1,544 | MEMBER | null | New:
- 3-4 times speed-up in fp16 thanks to NVIDIA's work on apex
- SWAG (multiple-choice) model added + example fine-tuning on SWAG
- bump up to PyTorch 1.0
- backward compatibility to python 3.5
- load fine-tuned model with `from_pretrained`
- add examples on how to save and load fine-tuned models | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/112/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/112",
"html_url": "https://github.com/huggingface/transformers/pull/112",
"diff_url": "https://github.com/huggingface/transformers/pull/112.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/112.patch",
"merged_at": 1544796947000
} |
https://api.github.com/repos/huggingface/transformers/issues/111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/111/comments | https://api.github.com/repos/huggingface/transformers/issues/111/events | https://github.com/huggingface/transformers/pull/111 | 389,696,001 | MDExOlB1bGxSZXF1ZXN0MjM3NjI2NDQw | 111 | update: add from_state_dict for PreTrainedBertModel | {
"login": "friskit-china",
"id": 2494883,
"node_id": "MDQ6VXNlcjI0OTQ4ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2494883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/friskit-china",
"html_url": "https://github.com/friskit-china",
"followers_url": "https://api.github.com/users/friskit-china/followers",
"following_url": "https://api.github.com/users/friskit-china/following{/other_user}",
"gists_url": "https://api.github.com/users/friskit-china/gists{/gist_id}",
"starred_url": "https://api.github.com/users/friskit-china/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/friskit-china/subscriptions",
"organizations_url": "https://api.github.com/users/friskit-china/orgs",
"repos_url": "https://api.github.com/users/friskit-china/repos",
"events_url": "https://api.github.com/users/friskit-china/events{/privacy}",
"received_events_url": "https://api.github.com/users/friskit-china/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I like the idea but I not a big fan of all the code duplication I'ld rather fuse the two loading functions in one.\r\nBasically we can just add a `state_dict` argument to `from_pretrained` and add a check in `from_pretrained` to handle the case.",
"> Hi, I like the idea but I not a big fan of all the code duplication I'ld rather fuse the two loading functions in one.\r\n> Basically we can just add a `state_dict` argument to `from_pretrained` and add a check in `from_pretrained` to handle the case.\r\n\r\nHi, thanks for your reply, I just tried to make sure that my code will not incorporate bugs so I added a new function instead of changing code in `from_pretrained`.\r\n:)"
] | 1,544 | 1,544 | 1,544 | NONE | null | This is for restoring the training procedure.
Now we can use torch.save to store the model and restore it with, e.g., `model = BertForSequenceClassification.from_state_dict('bert-large-uncased', state_dict=torch.load('xx.pth'))` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/111/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/111",
"html_url": "https://github.com/huggingface/transformers/pull/111",
"diff_url": "https://github.com/huggingface/transformers/pull/111.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/111.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/110/comments | https://api.github.com/repos/huggingface/transformers/issues/110/events | https://github.com/huggingface/transformers/issues/110 | 389,549,868 | MDU6SXNzdWUzODk1NDk4Njg= | 110 | Pretrained Tokenizer Loading Fails: 'PosixPath' object has no attribute 'rfind' | {
"login": "decodyng",
"id": 5902855,
"node_id": "MDQ6VXNlcjU5MDI4NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5902855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/decodyng",
"html_url": "https://github.com/decodyng",
"followers_url": "https://api.github.com/users/decodyng/followers",
"following_url": "https://api.github.com/users/decodyng/following{/other_user}",
"gists_url": "https://api.github.com/users/decodyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/decodyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/decodyng/subscriptions",
"organizations_url": "https://api.github.com/users/decodyng/orgs",
"repos_url": "https://api.github.com/users/decodyng/repos",
"events_url": "https://api.github.com/users/decodyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/decodyng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh you are right, the file caching utilities requires python 3.6.\r\n\r\nI don't intend to maintain a lot of backward compatibility in terms of Python versions (I already surrendered maintaining a Python 2 version) so I will bump up the requirements to python 3.6.\r\n\r\nIf you are limited to python 3.5 and find a way around this, don't hesitate to share your solution with a PR though.",
"Ok @hzhwcmhf fixed this issue with #113 and we will be compatible with Python 3.5+ again in the coming release (today probably). Thanks @hzhwcmhf!"
] | 1,544 | 1,544 | 1,544 | NONE | null | I was trying to work through the toy tokenization example from the main README, and I hit an error on the step of loading in a pre-trained BERT tokenizer.
```
~/bert_transfer$ python3 test_tokenizer.py
Traceback (most recent call last):
File "test_tokenizer.py", line 10, in <module>
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
File "/usr/local/lib/python3.5/dist-packages/pytorch_pretrained_bert/tokenization.py", line 117, in from_pretrained
resolved_vocab_file = cached_path(vocab_file, cache_dir=cache_dir)
File "/usr/local/lib/python3.5/dist-packages/pytorch_pretrained_bert/file_utils.py", line 88, in cached_path
return get_from_cache(url_or_filename, cache_dir)
File "/usr/local/lib/python3.5/dist-packages/pytorch_pretrained_bert/file_utils.py", line 169, in get_from_cache
os.makedirs(cache_dir, exist_ok=True)
File "/usr/lib/python3.5/os.py", line 226, in makedirs
head, tail = path.split(name)
File "/usr/lib/python3.5/posixpath.py", line 103, in split
i = p.rfind(sep) + 1
AttributeError: 'PosixPath' object has no attribute 'rfind'
~/bert_transfer$ python3 --version
Python 3.5.2
```
Exact usage in script:
```
from pytorch_pretrained_bert import BertTokenizer
test_sentence = "When PyTorch first launched in early 2017, it quickly became a popular choice among AI researchers, who found it ideal for rapid experimentation due to its flexible, dynamic programming environment and user-friendly interface"
if __name__ == "__main__":
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
```
I am curious if you're able to replicate this error on python 3.5.2, since the repo states support for 3.5+. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/110/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/109/comments | https://api.github.com/repos/huggingface/transformers/issues/109/events | https://github.com/huggingface/transformers/issues/109 | 389,540,611 | MDU6SXNzdWUzODk1NDA2MTE= | 109 | UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte | {
"login": "ryonakamura",
"id": 9457467,
"node_id": "MDQ6VXNlcjk0NTc0Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9457467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryonakamura",
"html_url": "https://github.com/ryonakamura",
"followers_url": "https://api.github.com/users/ryonakamura/followers",
"following_url": "https://api.github.com/users/ryonakamura/following{/other_user}",
"gists_url": "https://api.github.com/users/ryonakamura/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryonakamura/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryonakamura/subscriptions",
"organizations_url": "https://api.github.com/users/ryonakamura/orgs",
"repos_url": "https://api.github.com/users/ryonakamura/repos",
"events_url": "https://api.github.com/users/ryonakamura/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryonakamura/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I had the same question/confusion! Thanks for clarifying it should be the path to the directory and not the filename itself. ",
"Great help, thanks.",
"Thanks"
] | 1,544 | 1,592 | 1,544 | NONE | null | When I convert a TensorFlow checkpoint into a `pytorch_model.bin` and run `run_classifier.py` with the `--bert_model /path/to/pytorch_model.bin` option, the following error occurs in `tokenization.py`.
```shell
12/10/2018 18:11:59 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file /Users/MAC/bert/model/uncased_L-12_H-768_A-12/pytorch_model.bin
Traceback (most recent call last):
File "examples/run_classifier.py", line 637, in <module>
main()
File "examples/run_classifier.py", line 480, in main
tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case)
File "/Users/MAC/.pyenv/versions/anaconda3-5.3.0/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 133, in from_pretrained
tokenizer = cls(resolved_vocab_file, *inputs, **kwargs)
File "/Users/MAC/.pyenv/versions/anaconda3-5.3.0/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 76, in __init__
self.vocab = load_vocab(vocab_file)
File "/Users/MAC/.pyenv/versions/anaconda3-5.3.0/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 51, in load_vocab
token = reader.readline()
File "/Users/MAC/.pyenv/versions/anaconda3-5.3.0/lib/python3.6/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
```
I noticed that the `--bert_model` option is not the path to the `pytorch_model.bin` file but the path to the directory containing `pytorch_model.bin` and `vocab.txt`, so I'm closing this.
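A sketch of the distinction (the directory name is a placeholder): point `--bert_model`, or `from_pretrained`, at the directory, not at the weights file itself.
```python
from pytorch_pretrained_bert import BertTokenizer, BertModel

model_dir = "/path/to/uncased_L-12_H-768_A-12"    # should contain vocab.txt, bert_config.json, and pytorch_model.bin

tokenizer = BertTokenizer.from_pretrained(model_dir)   # reads vocab.txt from the directory
model = BertModel.from_pretrained(model_dir)           # reads the config and weights from the directory

# Passing ".../pytorch_model.bin" instead makes the tokenizer try to read the binary
# weights as a vocabulary file, which is what produces the UnicodeDecodeError above.
```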
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/109/reactions",
"total_count": 7,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/109/timeline | completed | null | null |