Dataset schema (one row per column: name, dtype, value range or class count):

| column | dtype | stats |
| --- | --- | --- |
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
url: https://api.github.com/repos/huggingface/transformers/issues/8321
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8321/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8321/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8321/events
html_url: https://github.com/huggingface/transformers/issues/8321
id: 736,854,753
node_id: MDU6SXNzdWU3MzY4NTQ3NTM=
number: 8,321
title: tensorboard.compat.tensorflow_stub.errors.AlreadyExistsError: Directory already exists
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
state: closed
locked: false
assignee: null
assignees: []
[ "Well it seems to tell you that your tensorboard directory already exists? Try with a different directory?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
created_at: 1,604
updated_at: 1,610
closed_at: 1,610
author_association: NONE
active_lock_reason: null
Hi I am running finetune_trainer.py on cloud with TPUs, here is the error, I appreciate your help thanks ```json { "textPayload": "Traceback (most recent call last):\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 330, in _mp_start_fn\n _start_fn(index, pf_cfg, fn, args)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 324, in _start_fn\n fn(gindex, *args)\n File \"/workdir/seq2seq/finetune_trainer.py\", line 303, in _mp_fn\n app.run(main, flags_parser=parse_flags)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py\", line 300, in run\n _run_main(main, args)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py\", line 251, in _run_main\n sys.exit(main(argv))\n File \"/workdir/seq2seq/finetune_trainer.py\", line 246, in main\n data_args=data_args,\n File \"/workdir/seq2seq/seq2seq_trainer.py\", line 37, in __init__\n super().__init__(*args, **kwargs)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py\", line 318, in __init__\n self.control = self.callback_handler.on_init_end(self.args, self.state, self.control)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_callback.py\", line 325, in on_init_end\n return self.call_event(\"on_init_end\", args, state, control)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer_callback.py\", line 376, in call_event\n **kwargs,\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/integrations.py\", line 213, in on_init_end\n self.tb_writer = SummaryWriter(log_dir=args.logging_dir)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py\", line 221, in __init__\n self._get_file_writer()\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py\", line 252, in _get_file_writer\n self.flush_secs, self.filename_suffix)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py\", line 62, in __init__\n log_dir, max_queue, flush_secs, filename_suffix)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/tensorboard/summary/writer/event_file_writer.py\", line 77, in __init__\n tf.io.gfile.makedirs(logdir)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/io/gfile.py\", line 673, in makedirs\n return get_filesystem(path).makedirs(path)\ntensorboard.compat.tensorflow_stub.errors.AlreadyExistsError: Directory already exists\n", "insertId": "5rl5rhwn8m5sobp9q", "resource": { "type": "k8s_container", "labels": { "container_name": "seq2seq", "location": "europe-west4-a", "project_id": "try-ideas-for-rmi", "namespace_name": "ruse-xgcp", "cluster_name": "xcloud-v3-donut-europe-west4-a", "pod_name": "20201105.df.e2753.0-7mxdc" } }, "timestamp": "2020-11-05T11:28:19.975413564Z", "severity": "ERROR", "labels": { "k8s-pod/job-name": "20201105.df.e2753.0", "k8s-pod/controller-uid": "4974cfc1-b3df-4882-9fd1-9095f4a944d9", "k8s-pod/jobowner": "ruse-xgcp", "k8s-pod/app": "xcloud", "k8s-pod/serviceName": "xc-20201105-df-e2753-0" }, "logName": "projects/try-ideas-for-rmi/logs/stderr", "receiveTimestamp": "2020-11-05T11:28:23.810886436Z" } ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8321/timeline
state_reason: completed
draft: null
pull_request: null
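The traceback in the record above ends in `tf.io.gfile.makedirs(logdir)` raising `AlreadyExistsError`, and the first comment suggests pointing at a different directory. A minimal sketch of that suggestion (an assumption, not code from the thread): give each run a fresh `logging_dir` before the Trainer's TensorBoard callback constructs its `SummaryWriter`.

```python
# Sketch only: pick a unique logging directory per run so makedirs() never
# hits a pre-existing path. `base` and the timestamp format are arbitrary.
import os
import time

def fresh_logging_dir(base="runs"):
    path = os.path.join(base, time.strftime("%Y%m%d-%H%M%S"))
    os.makedirs(path, exist_ok=True)  # exist_ok tolerates races between TPU workers
    return path

# e.g. pass the result as --logging_dir to finetune_trainer.py
```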
url: https://api.github.com/repos/huggingface/transformers/issues/8320
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8320/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8320/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8320/events
html_url: https://github.com/huggingface/transformers/pull/8320
id: 736,749,398
node_id: MDExOlB1bGxSZXF1ZXN0NTE1OTA4NzU0
number: 8,320
title: Corrected tpu typo in examples readme
{ "login": "GuillemGSubies", "id": 37592763, "node_id": "MDQ6VXNlcjM3NTkyNzYz", "avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GuillemGSubies", "html_url": "https://github.com/GuillemGSubies", "followers_url": "https://api.github.com/users/GuillemGSubies/followers", "following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}", "gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}", "starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions", "organizations_url": "https://api.github.com/users/GuillemGSubies/orgs", "repos_url": "https://api.github.com/users/GuillemGSubies/repos", "events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}", "received_events_url": "https://api.github.com/users/GuillemGSubies/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
[ "Confirming the failure is spurious. Thanks a lot for your fix!" ]
created_at: 1,604
updated_at: 1,604
closed_at: 1,604
author_association: CONTRIBUTOR
active_lock_reason: null
Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. documentation: @sgugger **EDIT**: Tests fail but that is definitely not due to my change
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8320/timeline
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/transformers/pulls/8320", "html_url": "https://github.com/huggingface/transformers/pull/8320", "diff_url": "https://github.com/huggingface/transformers/pull/8320.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8320.patch", "merged_at": 1604580516000 }
url: https://api.github.com/repos/huggingface/transformers/issues/8319
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8319/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8319/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8319/events
html_url: https://github.com/huggingface/transformers/issues/8319
id: 736,716,181
node_id: MDU6SXNzdWU3MzY3MTYxODE=
number: 8,319
title: Does Tokenizer provide parameters to split the number?
{ "login": "wulaoshi", "id": 27938964, "node_id": "MDQ6VXNlcjI3OTM4OTY0", "avatar_url": "https://avatars.githubusercontent.com/u/27938964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wulaoshi", "html_url": "https://github.com/wulaoshi", "followers_url": "https://api.github.com/users/wulaoshi/followers", "following_url": "https://api.github.com/users/wulaoshi/following{/other_user}", "gists_url": "https://api.github.com/users/wulaoshi/gists{/gist_id}", "starred_url": "https://api.github.com/users/wulaoshi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wulaoshi/subscriptions", "organizations_url": "https://api.github.com/users/wulaoshi/orgs", "repos_url": "https://api.github.com/users/wulaoshi/repos", "events_url": "https://api.github.com/users/wulaoshi/events{/privacy}", "received_events_url": "https://api.github.com/users/wulaoshi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
state: closed
locked: false
assignee: null
assignees: []
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
created_at: 1,604
updated_at: 1,610
closed_at: 1,610
author_association: NONE
active_lock_reason: null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I want to split "2004年06月25日" into [2, 0, 0, 5, 年, 0, 6, 月, 2, 5,日],not [2005, 年, 06, 月, 25, 日]. How can i do it the easiest? now I use these API : BertTokenizer BertTokenizerFast tokenizer.tokenize("2004年06月25日") Thanks. <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8319/timeline
state_reason: completed
draft: null
pull_request: null
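One way to get the per-digit split asked for in the issue above is to pre-process the text before tokenizing; a hedged sketch (not an official tokenizer option, and whether single digits survive as whole tokens depends on the checkpoint's vocabulary):

```python
import re
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")

def split_digits(text):
    # Insert a space between adjacent digits so WordPiece cannot merge them:
    # "2004年06月25日" -> "2 0 0 4年0 6月2 5日"
    return re.sub(r"(?<=\d)(?=\d)", " ", text)

print(tokenizer.tokenize(split_digits("2004年06月25日")))
```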
url: https://api.github.com/repos/huggingface/transformers/issues/8318
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8318/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8318/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8318/events
html_url: https://github.com/huggingface/transformers/pull/8318
id: 736,647,162
node_id: MDExOlB1bGxSZXF1ZXN0NTE1ODI0MTAz
number: 8,318
title: [s2s] test_bash_script.py - actually learn something
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
[ "It's ready to merge" ]
created_at: 1,604
updated_at: 1,604
closed_at: 1,604
author_association: CONTRIBUTOR
active_lock_reason: null
As discussed in https://github.com/huggingface/transformers/issues/6049 this PR replaces the original `test_train_mbart_cc25_enro_script` test with mostly new guts doing the following: * [x] switches to downloading and caching a bigger custom dataset, which is a subset of the full wmt_en_ro. I created https://cdn-datasets.huggingface.co/translation/wmt_en_ro-tr40k-va0.5k-te0.5k.tar.gz - hope the name is intuitive - self-documenting. It's just 3.6M (vs 56M original). I made it using this script: https://github.com/stas00/porting/blob/master/transformers/translation/make-wmt_en_ro-subset.md * [x] performs qualitative checks on the eval/test results: - Minimum learning requirement 1: BLEU improves over the course of training by more than 2 pts - Minimum learning requirement 2: BLEU finishes above 17 - Minimum learning requirement 3: test bleu and val bleu within ~1 pt. also makes some extra improvements: * [x] uses `require_torch_gpu` decorator * [x] removes hardcoded paths Runtime is about ~2min on unoptimized rtx-3090. Fixes: #6049 @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8318/timeline
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/transformers/pulls/8318", "html_url": "https://github.com/huggingface/transformers/pull/8318", "diff_url": "https://github.com/huggingface/transformers/pull/8318.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8318.patch", "merged_at": 1604636114000 }
url: https://api.github.com/repos/huggingface/transformers/issues/8317
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8317/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8317/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8317/events
html_url: https://github.com/huggingface/transformers/issues/8317
id: 736,599,507
node_id: MDU6SXNzdWU3MzY1OTk1MDc=
number: 8,317
title: FutureWarning: This config doesn't use attention memories, a core feature of XLNet even though I'm using mem_len
{ "login": "zainsarwar865", "id": 66789976, "node_id": "MDQ6VXNlcjY2Nzg5OTc2", "avatar_url": "https://avatars.githubusercontent.com/u/66789976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zainsarwar865", "html_url": "https://github.com/zainsarwar865", "followers_url": "https://api.github.com/users/zainsarwar865/followers", "following_url": "https://api.github.com/users/zainsarwar865/following{/other_user}", "gists_url": "https://api.github.com/users/zainsarwar865/gists{/gist_id}", "starred_url": "https://api.github.com/users/zainsarwar865/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zainsarwar865/subscriptions", "organizations_url": "https://api.github.com/users/zainsarwar865/orgs", "repos_url": "https://api.github.com/users/zainsarwar865/repos", "events_url": "https://api.github.com/users/zainsarwar865/events{/privacy}", "received_events_url": "https://api.github.com/users/zainsarwar865/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: { "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
assignees: [ { "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false } ]
[ "Any solutions to this?\r\n\r\nThanks!", "Hey, how are you setting `mem_len=384`? I am under the impression that the default `run_generation` script doesn't allow you to change the model from the default configuration, which would explain the warning.", "Here is how I am setting it\r\n`model = model_class.from_pretrained(args.model_name_or_path,cache_dir=args.cache_dir,mem_len=args.mem_len)`\r\n\r\n", "Ok, I've understood the issue - when using this call, the config first gets created with `mem_len=0` (hence the warning) then `mem_len` gets changed. I'll open a PR to move the warning to the `forward` call. In the meantime, you can suppress the warnings; memories are actually getting used.", "Okay great! \r\nThanks a lot. Closing the issue." ]
created_at: 1,604
updated_at: 1,604
closed_at: 1,604
author_association: NONE
active_lock_reason: null
So every time I try to run xl-net using the run_generation.py script, I get the warning : `FutureWarning: This config doesn't use attention memories, a core feature of XLNet. Consider setting mem_len to a non-zero value, for example xlnet = XLNetLMHeadModel.from_pretrained('xlnet-base-cased'', mem_len=1024), for accurate training performance as well as an order of magnitude faster inference.` The issue is that I am using mem_len, I'm setting it to 384 (have tried other values as well) but I still get this warning.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8317/timeline
state_reason: completed
draft: null
pull_request: null
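Per the maintainer's diagnosis in the comments above, the warning fires while the config is first built with `mem_len=0`, before the keyword override is applied, and memories are used regardless. Two hedged workarounds consistent with that diagnosis (assumptions, not verified fixes):

```python
import warnings
from transformers import XLNetConfig, XLNetLMHeadModel

# Option 1: set mem_len on the config before the model is instantiated.
config = XLNetConfig.from_pretrained("xlnet-base-cased", mem_len=384)
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased", config=config)

# Option 2: since memories are actually used despite the warning, silence it.
warnings.filterwarnings("ignore", category=FutureWarning)
```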
url: https://api.github.com/repos/huggingface/transformers/issues/8316
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8316/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8316/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8316/events
html_url: https://github.com/huggingface/transformers/issues/8316
id: 736,589,640
node_id: MDU6SXNzdWU3MzY1ODk2NDA=
number: 8,316
title: No loss in model output for TFElectraForPreTraining
{ "login": "alibi123", "id": 51895254, "node_id": "MDQ6VXNlcjUxODk1MjU0", "avatar_url": "https://avatars.githubusercontent.com/u/51895254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alibi123", "html_url": "https://github.com/alibi123", "followers_url": "https://api.github.com/users/alibi123/followers", "following_url": "https://api.github.com/users/alibi123/following{/other_user}", "gists_url": "https://api.github.com/users/alibi123/gists{/gist_id}", "starred_url": "https://api.github.com/users/alibi123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alibi123/subscriptions", "organizations_url": "https://api.github.com/users/alibi123/orgs", "repos_url": "https://api.github.com/users/alibi123/repos", "events_url": "https://api.github.com/users/alibi123/events{/privacy}", "received_events_url": "https://api.github.com/users/alibi123/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
state: closed
locked: false
assignee: { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
assignees: [ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
created_at: 1,604
updated_at: 1,610
closed_at: 1,610
author_association: NONE
active_lock_reason: null
No loss in model output for TFElectraForPreTraining, even though labels are provided. This is from docs for ELECTRA: `loss (optional, returned when labels is provided, tf.Tensor of shape (1,)) – Total loss of the ELECTRA objective.` The problem is that I want to get loss and logits, but model outputs only logits. My code snippet: ``` >>> import numpy as np >>> from transformers import ElectraTokenizer, TFElectraForPreTraining >>> >>> model = TFElectraForPreTraining.from_pretrained('google/electra-base-discriminator') All model checkpoint layers were used when initializing TFElectraForPreTraining. All the layers of TFElectraForPreTraining were initialized from the model checkpoint at google/electra-base-discriminator. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFElectraForPreTraining for predictions without further training. >>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-base-discriminator') >>> inputs = tokenizer('simple phrase', return_tensors='tf') >>> labels = np.array([0] * 4) >>> out = model(inputs, labels=labels) >>> len(out) 1 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8316/timeline
state_reason: completed
draft: null
pull_request: null
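The snippet in the issue above passes `labels` as a bare 1-D NumPy array. One thing worth checking (an assumption, not a confirmed fix from the thread) is that TF models generally expect labels shaped like `input_ids`, i.e. with an explicit batch dimension and tensor dtype:

```python
import tensorflow as tf

# Continues the snippet above: `model` and `inputs` as defined there.
# labels shaped (batch, seq_len), matching inputs["input_ids"]; the token
# count (4) is just the length of the tokenized example in the issue.
labels = tf.constant([[0, 0, 0, 0]])
out = model(inputs, labels=labels)
```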
url: https://api.github.com/repos/huggingface/transformers/issues/8315
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8315/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8315/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8315/events
html_url: https://github.com/huggingface/transformers/pull/8315
id: 736,577,451
node_id: MDExOlB1bGxSZXF1ZXN0NTE1NzY2MjE3
number: 8,315
title: [s2s] test_distributed_eval
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
[ "Fyi the example tests don't currently run on the multi-gpu setup, as pretty much none of the examples tests were made to run on a multi-gpu setup.\r\n\r\nWe can look into adding them to the CI.", "> Fyi the example tests don't currently run on the multi-gpu setup, as pretty much none of the examples tests were made to run on a multi-gpu setup.\r\n> \r\n> We can look into adding them to the CI.\r\n\r\nAbsolutely!\r\n\r\nDo you prefer to start with a CI job that explicitly lists tests that were converted to run on multi-gpu (3 at the moment) or just run them all? It's all that `require_torch_mutigpu` plus ` that `require_torch_gpu` as it's flexible.", "> Do you prefer to start with a CI job that explicitly lists tests that were converted to run on multi-gpu (3 at the moment) or just run them all? It's all that require_torch_mutigpu plus that `require_torch_gpu` as it's flexible.\r\n\r\nI would send a PR with whatever you think makes most sense. Or wait until the correct answer is clearer. At this point you know pretty much as much about CI tradeoffs as I do :)", "I will prepare a new PR for github CIs.\r\n\r\notherwise I think this PR is complete." ]
created_at: 1,604
updated_at: 1,604
closed_at: 1,604
author_association: CONTRIBUTOR
active_lock_reason: null
This PR: * [x] adds a helper function `get_gpu_count()`, which returns the number of available gpus (regardless of whether torch or tf is used) (otherwise we have to do all the `if _torch_available: import torch` rigmarole. * [x] adds a basic test for `run_distributed_eval.py` Things I wasn't sure about: - I put the new test into `test_seq2seq_examples_multi_gpu.py`, but it doesn't require multigpu. should I create a new test file `test_seq2seq_examples_gpu.py` or just leave it here for now and refactor once we have more tests like this? - I made it `@slow`, but wasn't 100% sure - it takes some 25s - probably too slow. Fixes: #8297 @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8315/timeline
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/transformers/pulls/8315", "html_url": "https://github.com/huggingface/transformers/pull/8315", "diff_url": "https://github.com/huggingface/transformers/pull/8315.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8315.patch", "merged_at": 1604610076000 }
url: https://api.github.com/repos/huggingface/transformers/issues/8314
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8314/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8314/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8314/events
html_url: https://github.com/huggingface/transformers/pull/8314
id: 736,573,962
node_id: MDExOlB1bGxSZXF1ZXN0NTE1NzYzMzMz
number: 8,314
title: [QA examples] fix inconsistent tokenization in _improve_answer_span
{ "login": "xiye17", "id": 43059752, "node_id": "MDQ6VXNlcjQzMDU5NzUy", "avatar_url": "https://avatars.githubusercontent.com/u/43059752?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiye17", "html_url": "https://github.com/xiye17", "followers_url": "https://api.github.com/users/xiye17/followers", "following_url": "https://api.github.com/users/xiye17/following{/other_user}", "gists_url": "https://api.github.com/users/xiye17/gists{/gist_id}", "starred_url": "https://api.github.com/users/xiye17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiye17/subscriptions", "organizations_url": "https://api.github.com/users/xiye17/orgs", "repos_url": "https://api.github.com/users/xiye17/repos", "events_url": "https://api.github.com/users/xiye17/events{/privacy}", "received_events_url": "https://api.github.com/users/xiye17/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
[ "@LysandreJik Not sure who I should ping to review this :) btw I am not sure what's the issue with the code quality here. ", "Hello, we're in the midst of deprecating that method and shifting the methods to `datasets` instead. Could you hold on until we have the updated approach, which should land sometimes within the next ~2 weeks?\r\n\r\nThis current method is not tested, so ensuring that a bug fix isn't creating all sorts of other bugs takes us a lot of time, so we're not going to do that here as this method will be deprecated very soon.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
created_at: 1,604
updated_at: 1,619
closed_at: 1,619
author_association: CONTRIBUTOR
active_lock_reason: null
Basically, I am trying to fix the QA preprocessing steps when using BPE-based tokenizers (Roberta, Bart, Longformer) The prior _improve_answer_span has a bug because the way it tokenizes the answer_text is different from the way context is handled. So when the original context is '1987', and the old input ans_span is ['Ġ1987', ','], then the ',' won't be removed. To fix these cases I use two cleaned forms built by either including prefix_space or not. If a sub_answer_span matches either one of them we'll improve the answer_span. The first form [line 42] is used to handle things with leading punctuations, e.g., orig_answer_context=="73 million" and ans_span is ["Ġ$", "73", "Ġmillion"] (should be cleaned to ["73", "Ġmillion"] ). The second form is used to handle things with ending punctuations, e.g., the '1987' example mentioned above.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8314/timeline
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/transformers/pulls/8314", "html_url": "https://github.com/huggingface/transformers/pull/8314", "diff_url": "https://github.com/huggingface/transformers/pull/8314.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8314.patch", "merged_at": null }
url: https://api.github.com/repos/huggingface/transformers/issues/8313
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8313/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8313/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8313/events
html_url: https://github.com/huggingface/transformers/issues/8313
id: 736,551,670
node_id: MDU6SXNzdWU3MzY1NTE2NzA=
number: 8,313
title: OOMKilled with exit code 137 with finetune_trainer.py
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
state: closed
locked: false
assignee: null
assignees: []
[ "I even set batch_size to 1 and still the error is there. any suggestion? ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
created_at: 1,604
updated_at: 1,610
closed_at: 1,610
author_association: NONE
active_lock_reason: null
Hi I am trying to train on WMT with TPUs using seq2seq_trainer.py, and I always get "OOMKilled with exit code 137" on the line when it tries to use xm.spawn to run the main function on multiple TPUs. I decreased the batch size as much as possible, still the error is there. Do you have an idea why this is happening? thank you. I appreciate your help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8313/timeline
state_reason: completed
draft: null
pull_request: null
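Exit code 137 means the host process was killed by the OOM killer, which is consistent with batch size having no effect: the pressure is on host RAM, not device memory. One commonly suggested mitigation (an assumption, not advice from this thread) is to spawn the TPU processes with `fork` so they share the parent's already-loaded, read-only data pages:

```python
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    ...  # training entry point would go here

if __name__ == "__main__":
    # start_method="fork" shares the parent's memory copy-on-write instead of
    # re-loading the dataset once per process.
    xmp.spawn(_mp_fn, args=(), nprocs=8, start_method="fork")
```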
url: https://api.github.com/repos/huggingface/transformers/issues/8312
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8312/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8312/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8312/events
html_url: https://github.com/huggingface/transformers/pull/8312
id: 736,517,415
node_id: MDExOlB1bGxSZXF1ZXN0NTE1NzE3OTMy
number: 8,312
title: Create README.md
{ "login": "ktrapeznikov", "id": 4052002, "node_id": "MDQ6VXNlcjQwNTIwMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/4052002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ktrapeznikov", "html_url": "https://github.com/ktrapeznikov", "followers_url": "https://api.github.com/users/ktrapeznikov/followers", "following_url": "https://api.github.com/users/ktrapeznikov/following{/other_user}", "gists_url": "https://api.github.com/users/ktrapeznikov/gists{/gist_id}", "starred_url": "https://api.github.com/users/ktrapeznikov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ktrapeznikov/subscriptions", "organizations_url": "https://api.github.com/users/ktrapeznikov/orgs", "repos_url": "https://api.github.com/users/ktrapeznikov/repos", "events_url": "https://api.github.com/users/ktrapeznikov/events{/privacy}", "received_events_url": "https://api.github.com/users/ktrapeznikov/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
state: closed
locked: false
assignee: null
assignees: []
comments: []
created_at: 1,604
updated_at: 1,604
closed_at: 1,604
author_association: CONTRIBUTOR
active_lock_reason: null
model card for ktrapeznikov/gpt2-medium-topic-news
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8312/timeline
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/transformers/pulls/8312", "html_url": "https://github.com/huggingface/transformers/pull/8312", "diff_url": "https://github.com/huggingface/transformers/pull/8312.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8312.patch", "merged_at": 1604661599000 }
url: https://api.github.com/repos/huggingface/transformers/issues/8311
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8311/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8311/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8311/events
html_url: https://github.com/huggingface/transformers/issues/8311
id: 736,501,527
node_id: MDU6SXNzdWU3MzY1MDE1Mjc=
number: 8,311
title: error 'ascii' codec can't decode byte 0xc3 in position 6550: ordinal not in range(128) when running finetune_trainer.py on multiple tpus
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
state: closed
locked: false
assignee: null
assignees: []
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
created_at: 1,604
updated_at: 1,610
closed_at: 1,610
author_association: NONE
active_lock_reason: null
Hi I am running finetune_trainer.py on multiple tpus getting the following error, thank you for your help { "textPayload": "Traceback (most recent call last):\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 330, in _mp_start_fn\n _start_fn(index, pf_cfg, fn, args)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 324, in _start_fn\n fn(gindex, *args)\n File \"/workdir/seq2seq/finetune_trainer.py\", line 299, in _mp_fn\n app.run(main, flags_parser=parse_flags)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py\", line 300, in run\n _run_main(main, args)\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py\", line 251, in _run_main\n sys.exit(main(argv))\n File \"/workdir/seq2seq/finetune_trainer.py\", line 200, in main\n if training_args.do_train\n File \"/workdir/seq2seq/utils.py\", line 128, in __init__\n self.src_lens = self.get_char_lens(self.src_file)\n File \"/workdir/seq2seq/utils.py\", line 147, in get_char_lens\n return [len(x) for x in Path(data_file).open().readlines()]\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/encodings/ascii.py\", line 26, in decode\n return codecs.ascii_decode(input, self.errors)[0]\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 6550: ordinal not in range(128)\n", "insertId": "5rl5rhwn8m5snsv0h", "resource": { "type": "k8s_container", "labels": { "namespace_name": "ruse-xgcp", "project_id": "try-ideas-for-rmi", "pod_name": "20201104.seq2seq.7685c.0-md6t7", "container_name": "seq2seq", "location": "europe-west4-a", "cluster_name": "xcloud-v3-donut-europe-west4-a" } }, "timestamp": "2020-11-04T23:53:22.599603005Z", "severity": "ERROR", "labels": { "k8s-pod/jobowner": "ruse-xgcp", "k8s-pod/serviceName": "xc-20201104-seq2seq-7685c-0", "k8s-pod/controller-uid": "32981a05-9c69-4234-b64e-6750f0afde11", "k8s-pod/app": "xcloud", "k8s-pod/job-name": "20201104.seq2seq.7685c.0" }, "logName": "projects/try-ideas-for-rmi/logs/stderr", "receiveTimestamp": "2020-11-04T23:53:27.836579865Z" }
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8311/timeline
state_reason: completed
draft: null
pull_request: null
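The traceback in the issue above points at `Path(data_file).open().readlines()`, which decodes with the platform default codec (ASCII here). The likely fix (an assumption, not from the thread) is to force UTF-8 when opening:

```python
from pathlib import Path

def get_char_lens(data_file):
    # The WMT data contains non-ASCII bytes (0xc3 starts a UTF-8 multibyte
    # sequence), so the encoding must be stated explicitly.
    return [len(x) for x in Path(data_file).open(encoding="utf-8").readlines()]
```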
url: https://api.github.com/repos/huggingface/transformers/issues/8310
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8310/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8310/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8310/events
html_url: https://github.com/huggingface/transformers/issues/8310
id: 736,479,364
node_id: MDU6SXNzdWU3MzY0NzkzNjQ=
number: 8,310
title: Running seq2seq_trainer with iterable datasets
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
state: closed
locked: false
assignee: null
assignees: []
[ "Maybe @patil-suraj has an idea!", "Hi\r\nI really appreciate assisting me with this question, I am still struggling with it, could you tell me some pointers, how to add the handling capability of iterative datasets to finetune_trainer.py codes. thanks @patil-suraj ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
created_at: 1,604
updated_at: 1,610
closed_at: 1,610
author_association: NONE
active_lock_reason: null
Hi Looking into https://github.com/huggingface/transformers/blob/master/examples/seq2seq/seq2seq_trainer.py line 117, if the dataset is iterative, then the code does not return a distributed sampler so in case one wants to train on TPUs on multiple cores with iterable datasets, I was wondering if you could assist me how I can use the codes for iterable datasets on TPUs thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8310/timeline
state_reason: completed
draft: null
pull_request: null
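An `IterableDataset` has no sampler to distribute, so each TPU core has to select its own shard inside `__iter__`. A hedged sketch of that pattern (an assumption, not code from the repo):

```python
import itertools
from torch.utils.data import IterableDataset

class ShardedIterable(IterableDataset):
    def __init__(self, generator_fn, rank, world_size):
        self.generator_fn = generator_fn  # zero-argument callable yielding examples
        self.rank, self.world_size = rank, world_size

    def __iter__(self):
        # Every core takes every world_size-th example, offset by its rank,
        # so the cores see disjoint slices of the stream.
        return itertools.islice(self.generator_fn(), self.rank, None, self.world_size)
```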
url: https://api.github.com/repos/huggingface/transformers/issues/8309
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8309/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8309/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8309/events
html_url: https://github.com/huggingface/transformers/pull/8309
id: 736,411,790
node_id: MDExOlB1bGxSZXF1ZXN0NTE1NjMwMjQ0
number: 8,309
title: examples/docs: caveat that PL examples don't work on TPU
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
comments: []
created_at: 1,604
updated_at: 1,604
closed_at: 1,604
author_association: CONTRIBUTOR
active_lock_reason: null
"Note that this approach does not work for examples that use `pytorch-lightning`."
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8309/timeline
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/transformers/pulls/8309", "html_url": "https://github.com/huggingface/transformers/pull/8309", "diff_url": "https://github.com/huggingface/transformers/pull/8309.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8309.patch", "merged_at": 1604930123000 }
url: https://api.github.com/repos/huggingface/transformers/issues/8308
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8308/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8308/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8308/events
html_url: https://github.com/huggingface/transformers/pull/8308
id: 736,394,824
node_id: MDExOlB1bGxSZXF1ZXN0NTE1NjE2MDY1
number: 8,308
title: Clean up data collators and datasets
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
comments: []
created_at: 1,604
updated_at: 1,604
closed_at: 1,604
author_association: COLLABORATOR
active_lock_reason: null
# What does this PR do? This PR cleans a bit the `DataCollatorForLanguageModeling` by: - making sure it keeps additional labels (such a next sentence labels) - use the `tokenizer.pad` function when possible, rewrite the function that was doing the padding when it's not (to handle padding on the left as well as the right side) - take advantage of the `special_tokens_mask` that the tokenizer can return, to avoid unnecessary conversions tensor -> list -> tensor again As a result it deprecates `DataCollatorForSOP` (which can be replaced by `DataCollatorForLanguageModeling`) and changes the elements of `TextDatasetForNextSentencePrediction` to work with `DataCollatorForLanguageModeling`. That dataset change renders `DataCollatorForNextSentencePrediction` unusable, so it's removed (breaking change, though this is relatively contained, users that were using it should just use `DataCollatorForLanguageModeling` instead). Tests are adapted to check `DataCollatorForLanguageModeling` does work for all those tasks. In passing, all text datasets get their deprecation warning as we are now encouraging users to move the datasets library.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8308/timeline
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/transformers/pulls/8308", "html_url": "https://github.com/huggingface/transformers/pull/8308", "diff_url": "https://github.com/huggingface/transformers/pull/8308.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8308.patch", "merged_at": 1604528689000 }
url: https://api.github.com/repos/huggingface/transformers/issues/8307
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/8307/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/8307/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/8307/events
html_url: https://github.com/huggingface/transformers/issues/8307
id: 736,364,546
node_id: MDU6SXNzdWU3MzYzNjQ1NDY=
number: 8,307
title: run_mlm.py: error: argument
{ "login": "Shafi2016", "id": 56795978, "node_id": "MDQ6VXNlcjU2Nzk1OTc4", "avatar_url": "https://avatars.githubusercontent.com/u/56795978?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shafi2016", "html_url": "https://github.com/Shafi2016", "followers_url": "https://api.github.com/users/Shafi2016/followers", "following_url": "https://api.github.com/users/Shafi2016/following{/other_user}", "gists_url": "https://api.github.com/users/Shafi2016/gists{/gist_id}", "starred_url": "https://api.github.com/users/Shafi2016/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shafi2016/subscriptions", "organizations_url": "https://api.github.com/users/Shafi2016/orgs", "repos_url": "https://api.github.com/users/Shafi2016/repos", "events_url": "https://api.github.com/users/Shafi2016/events{/privacy}", "received_events_url": "https://api.github.com/users/Shafi2016/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
[ "Note that the old script is still available [here](https://github.com/huggingface/transformers/blob/master/examples/contrib/legacy/run_language_modeling.py).\r\n\r\nThe new one expects slightly different arguments:\r\n- you should remove the `mlm` flag now, since the script has been split in several others\r\n- `train_data_file` is now `train_file`\r\n- there is no `model_type` argument anymore as this is not necessary when we have the model checkpoint\r\n- `block_size` has been renamed `max_seq_length`", "Thank you so much!!!\r\n\r\nYou saved my day ", "I am getting the following error while running the below code. Could anyone please help me with this?\r\n\r\n!python \"/content/transformers/examples/language-modeling/run_mlm.py\" \\\r\n--output_dir \"/content/drive/MyDrive/Bert_models/test-mlm\" \\\r\n--model_name_or_path \"/content/bert-base-uncased\" \\\r\n--do_train \\\r\n--do_eval \\\r\n--train_file \"/content/train.txt\" \\\r\n--validation_file \"/content/test.txt\" \\\r\n\r\n\r\nError:\r\n[INFO|modeling_utils.py:1022] 2020-12-26 10:18:28,002 >> loading weights file /content/bert-base-uncased/pytorch_model.bin\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py\", line 1035, in from_pretrained\r\n state_dict = torch.load(resolved_archive_file, map_location=\"cpu\")\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 595, in load\r\n return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 764, in _legacy_load\r\n magic_number = pickle_module.load(f, **pickle_load_args)\r\n_pickle.UnpicklingError: invalid load key, 'v'.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/content/transformers/examples/language-modeling/run_mlm.py\", line 420, in <module>\r\n main()\r\n File \"/content/transformers/examples/language-modeling/run_mlm.py\", line 264, in main\r\n cache_dir=model_args.cache_dir,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_auto.py\", line 1092, in from_pretrained\r\n pretrained_model_name_or_path, *model_args, config=config, **kwargs\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py\", line 1038, in from_pretrained\r\n f\"Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' \"\r\nOSError: Unable to load weights from pytorch checkpoint file for '/content/bert-base-uncased' at '/content/bert-base-uncased/pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
NONE
null
I just found today that `transformers/examples/language-modeling/run_language_modeling.py` has been replaced by the new scripts under `transformers/examples/language-modeling` (e.g. `run_mlm.py`). Everything was fine for me when I was using `run_language_modeling.py`, but with the new script I am getting the error **run_mlm.py: error: argument --mlm_probability: expected one argument** and I could not find the reason despite spending many hours on it.

```
!python "/content/transformers/examples/language-modeling/run_mlm.py" \
  --output_dir "/content/drive/My Drive/Ottawa_citit" \
  --model_type roberta \
  --model_name_or_path roberta-base \
  --do_train \
  --per_gpu_train_batch_size 16 \
  --seed 42 \
  --train_data_file "/content/input_text.txt" \
  --block_size 512 \
  --line_by_line \
  --learning_rate 6e-4 \
  --num_train_epochs 4 \
  --save_total_limit 2 \
  --run_name high_mlm_prob \
  --save_steps 200 \
  --mlm \
  --weight_decay 0.01 \
  --mlm_probability 0.15
```

Thanks!!
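For reference, the reported error is most likely argparse prefix matching: `--mlm` no longer exists in `run_mlm.py`, so argparse abbreviates it to `--mlm_probability`, which then expects a value and instead sees `--weight_decay`. A corrected invocation, only a sketch assuming the remaining flags carry over unchanged and applying the renames from the comment above (`--train_data_file` to `--train_file`, `--block_size` to `--max_seq_length`, `--model_type` and `--mlm` dropped), would be:

```
!python "/content/transformers/examples/language-modeling/run_mlm.py" \
  --output_dir "/content/drive/My Drive/Ottawa_citit" \
  --model_name_or_path roberta-base \
  --do_train \
  --per_gpu_train_batch_size 16 \
  --seed 42 \
  --train_file "/content/input_text.txt" \
  --max_seq_length 512 \
  --line_by_line \
  --learning_rate 6e-4 \
  --num_train_epochs 4 \
  --save_total_limit 2 \
  --run_name high_mlm_prob \
  --save_steps 200 \
  --weight_decay 0.01 \
  --mlm_probability 0.15
```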
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8307/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8306
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8306/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8306/comments
https://api.github.com/repos/huggingface/transformers/issues/8306/events
https://github.com/huggingface/transformers/issues/8306
736,359,262
MDU6SXNzdWU3MzYzNTkyNjI=
8,306
Tokenizers save_pretrained broken when defining vocab and merges file arguments (v3.1)
{ "login": "totogot", "id": 73946588, "node_id": "MDQ6VXNlcjczOTQ2NTg4", "avatar_url": "https://avatars.githubusercontent.com/u/73946588?v=4", "gravatar_id": "", "url": "https://api.github.com/users/totogot", "html_url": "https://github.com/totogot", "followers_url": "https://api.github.com/users/totogot/followers", "following_url": "https://api.github.com/users/totogot/following{/other_user}", "gists_url": "https://api.github.com/users/totogot/gists{/gist_id}", "starred_url": "https://api.github.com/users/totogot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/totogot/subscriptions", "organizations_url": "https://api.github.com/users/totogot/orgs", "repos_url": "https://api.github.com/users/totogot/repos", "events_url": "https://api.github.com/users/totogot/events{/privacy}", "received_events_url": "https://api.github.com/users/totogot/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I cannot reproduce the error on master with\r\n```\r\ntransformers 4.0.0rc1\r\ntokenizers 0.9.4\r\n```", "@thomwolf I think it must be specific to the particular version of transformers (v3.1)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
NONE
null
## Problem Information

Since the upgrade to transformers v3, it appears that issues arise when defining a RoBERTa or BART tokenizer using the vocab_file and merges_file arguments. Previously, defining the tokenizer by supplying these two arguments resulted in a correctly configured tokenizer and enabled the tokenizer to be saved using the standard .save_pretrained() function. However, since the upgrade to v3, it appears that using such arguments causes the tokenizer to default to incorrect "init_kwargs".

## Replicating the problem

```py
from transformers import RobertaTokenizer

# load the tokenizer using the standard method - results in correct init_kwargs
correct_tokenizer = RobertaTokenizer.from_pretrained("roberta-large")

# save the tokenizer so the vocab and merges file can be read in from disk
correct_tokenizer.save_pretrained("file_path/roberta_tokenizer")

# load the saved vocab and merges files to create a new tokenizer with the same properties
broken_tokenizer = RobertaTokenizer(
    vocab_file="file_path/roberta_tokenizer/vocab.json",
    merges_file="file_path/roberta_tokenizer/merges.txt"
)

# attempt to save the second tokenizer using the same function as above
broken_tokenizer.save_pretrained("file_path/saved_folder")
```

In the above script, loading the tokenizer using the `.from_pretrained('roberta-large')` method results in a tokenizer with the correct properties. The tokenizer's init_kwargs are as expected, in the form of a dictionary as follows:
- merges_file: "file/path/...",
- model_max_length: 512,
- vocab_file: "file/path/...",

This tokenizer can be saved using the `.save_pretrained()` function as intended.

However, when you load a tokenizer while defining "vocab_file=vocab.json" and "merges_file=merges.txt", as is the case with "broken_tokenizer", it appears that the init_kwargs default to incorrect properties. Unlike before, the init_kwargs now present a dictionary with no mention of model configurations, but instead include the following:
- bos_token: AddedToken(bos_token, lstrip=False, rstrip=False)
- eos_token: AddedToken(eos_token, lstrip=False, rstrip=False)
- sep_token: AddedToken(sep_token, lstrip=False, rstrip=False)
- cls_token: AddedToken(cls_token, lstrip=False, rstrip=False)
- unk_token: AddedToken(unk_token, lstrip=False, rstrip=False)
- pad_token: AddedToken(pad_token, lstrip=False, rstrip=False)

## Error identification

The error in saving appears to arise when you reach the following section of the `save_pretrained()` function within tokenization_utils_base:

```py
tokenizer_config = copy.deepcopy(self.init_kwargs)
if len(self.init_inputs) > 0:
    tokenizer_config["init_inputs"] = copy.deepcopy(self.init_inputs)
for file_id in self.vocab_files_names.keys():
    tokenizer_config.pop(file_id, None)
with open(tokenizer_config_file, "w", encoding="utf-8") as f:
    f.write(json.dumps(tokenizer_config, ensure_ascii=False))
```

In the scenario where the tokenizer has the correct kwargs, the vocab_file and merges_file are popped from "tokenizer_config", leaving a tokenizer config that can be saved in JSON format. However, where the incorrect kwargs are configured, as is the case with "broken_tokenizer" above, the resulting "tokenizer_config" contains AddedToken objects and therefore results in the following error:

**"TypeError: Object of type AddedToken is not JSON serializable"**

## Versioning
- Transformers = 3.1
- Tokenizers = 0.8.1rc2
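A workaround that sidesteps the broken init_kwargs path on v3.1, offered only as a sketch assuming the files were produced by save_pretrained() as in the report above, is to reload the whole saved directory with from_pretrained() instead of passing vocab_file/merges_file to the constructor:

```py
from transformers import RobertaTokenizer

# from_pretrained() on the saved directory rebuilds init_kwargs
# (vocab_file, merges_file, model_max_length, ...) correctly, so a
# subsequent save_pretrained() serializes to JSON without error.
tokenizer = RobertaTokenizer.from_pretrained("file_path/roberta_tokenizer")
tokenizer.save_pretrained("file_path/saved_folder")
```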
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8306/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8305
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8305/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8305/comments
https://api.github.com/repos/huggingface/transformers/issues/8305/events
https://github.com/huggingface/transformers/issues/8305
736,319,567
MDU6SXNzdWU3MzYzMTk1Njc=
8,305
Resource exhausted when training in loop
{ "login": "emillykkejensen", "id": 8842355, "node_id": "MDQ6VXNlcjg4NDIzNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/8842355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emillykkejensen", "html_url": "https://github.com/emillykkejensen", "followers_url": "https://api.github.com/users/emillykkejensen/followers", "following_url": "https://api.github.com/users/emillykkejensen/following{/other_user}", "gists_url": "https://api.github.com/users/emillykkejensen/gists{/gist_id}", "starred_url": "https://api.github.com/users/emillykkejensen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emillykkejensen/subscriptions", "organizations_url": "https://api.github.com/users/emillykkejensen/orgs", "repos_url": "https://api.github.com/users/emillykkejensen/repos", "events_url": "https://api.github.com/users/emillykkejensen/events{/privacy}", "received_events_url": "https://api.github.com/users/emillykkejensen/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-4.15.0-109-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help
@LysandreJik, @thomwolf or @jplu maybe?

## Information
Model I am using: Bert

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)

## To reproduce
To reproduce this error, one can use the script below. I have taken it from a blog post I made and altered it a bit, so it is a bit long. The main thing, however, seems to be that if I load and fit a model several times, it will at some point make my GPU run out of memory. Here I use kerastuner for hyperparameter optimization, and after 7 trials my GPU runs out of memory and throws the error:

`(0) Resource exhausted: OOM when allocating tensor with shape[3200,3072] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[node BERT_MultiLabel_MultiClass/tf_bert_model/bert/encoder/layer_._11/intermediate/dense/Tensordot/MatMul (defined at myLib/python3.7/site-packages/transformers/modeling_tf_bert.py:327) ]]`

### Reproducible script
```
#######################################
### -------- Load libraries ------- ###

# Load Huggingface transformers
from transformers import TFBertModel, BertConfig, BertTokenizerFast

# Then what you need from tensorflow.keras
from tensorflow.keras.layers import Input, Dropout, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.initializers import TruncatedNormal
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.metrics import CategoricalAccuracy
from tensorflow.keras.utils import to_categorical

# And pandas for data import + sklearn because you always need sklearn
import pandas as pd
from sklearn.model_selection import train_test_split

#######################################
### --------- Import data --------- ###

# Import data from csv
data = pd.read_csv('dev/Fun with BERT/complaints.csv')

# Select required columns
data = data[['Consumer complaint narrative', 'Product', 'Issue']]

# Remove a row if any of the three remaining columns are missing
data = data.dropna()
data = data.sample(n = 5000)

# Remove rows where the label is present only once (can't be split)
data = data.groupby('Issue').filter(lambda x : len(x) > 1)
data = data.groupby('Product').filter(lambda x : len(x) > 1)

# Set your model output as categorical and save in new label col
data['Issue_label'] = pd.Categorical(data['Issue'])
data['Product_label'] = pd.Categorical(data['Product'])

# Transform your output to numeric
data['Issue'] = data['Issue_label'].cat.codes
data['Product'] = data['Product_label'].cat.codes

# Split into train and test - stratify over Issue
data, data_test = train_test_split(data, test_size = 0.2, stratify = data[['Issue']])

#######################################
### --------- Setup BERT ---------- ###

# Name of the BERT model to use
model_name = 'bert-base-uncased'

# Max length of tokens
max_length = 100

# Load transformers config and set output_hidden_states to False
config = BertConfig.from_pretrained(model_name)
config.output_hidden_states = False

# Load BERT tokenizer
tokenizer = BertTokenizerFast.from_pretrained(pretrained_model_name_or_path = model_name, config = config)

#######################################
### ------- Build the model ------- ###

def model_build(hp):
    # TF Keras documentation: https://www.tensorflow.org/api_docs/python/tf/keras/Model

    # Load the Transformers BERT model
    transformer_model = TFBertModel.from_pretrained(model_name, config = config)

    # Build your model input
    input_ids = Input(shape=(max_length,), name='input_ids', dtype='int32')
    # attention_mask = Input(shape=(max_length,), name='attention_mask', dtype='int32')
    # inputs = {'input_ids': input_ids, 'attention_mask': attention_mask}
    inputs = {'input_ids': input_ids}

    # Load the Transformers BERT model as a layer in a Keras model
    bert_model = transformer_model(inputs)[1]
    dropout = Dropout(hp.Float('hp_dropout', min_value=0, max_value=0.99, default = 0.2), name='pooled_output')
    pooled_output = dropout(bert_model, training=False)

    # Then build your model output
    issue = Dense(units=len(data.Issue_label.value_counts()), kernel_initializer=TruncatedNormal(stddev=hp.Float('hp_stddev_issue', min_value=0, max_value=1, default = 0.02)), name='issue')(pooled_output)
    product = Dense(units=len(data.Product_label.value_counts()), kernel_initializer=TruncatedNormal(stddev=hp.Float('hp_stddev_product', min_value=0, max_value=1, default = 0.02)), name='product')(pooled_output)
    outputs = {'issue': issue, 'product': product}

    # And combine it all in a model object
    model = Model(inputs=inputs, outputs=outputs, name='BERT_MultiLabel_MultiClass')

    # Set an optimizer
    optimizer = Adam(
        learning_rate=5e-05,
        epsilon=1e-08,
        decay=0.01,
        clipnorm=1.0)

    # Set loss and metrics
    loss = {'issue': CategoricalCrossentropy(from_logits = True), 'product': CategoricalCrossentropy(from_logits = True)}
    metric = {'issue': CategoricalAccuracy('accuracy'), 'product': CategoricalAccuracy('accuracy')}

    # Compile the model
    model.compile(
        optimizer = optimizer,
        loss = loss,
        metrics = metric)

    return model

from kerastuner import HyperModel

class ClsModel(HyperModel):
    def build(self, fitting_param = None):
        model = model_build()
        return model

#######################################
### ------- Train the model ------- ###

# Ready output data for the model
y_issue = to_categorical(data['Issue'])
y_product = to_categorical(data['Product'])

# Tokenize the input (takes some time)
x = tokenizer(
    text=data['Consumer complaint narrative'].to_list(),
    add_special_tokens=True,
    max_length=max_length,
    truncation=True,
    padding=True,
    return_tensors='tf',
    return_token_type_ids = False,
    return_attention_mask = True,
    verbose = True)

from kerastuner.tuners import BayesianOptimization

tuner = BayesianOptimization(
    model_build,
    objective='loss',
    max_trials=50,
    executions_per_trial=5,
    project_name='modeltest')

tuner.search(
    x={'input_ids': x['input_ids']},
    y={'issue': y_issue, 'product': y_product},
    epochs=5)
```

### Full error log
Or almost: I removed some 100 lines of `2020-11-04 16:53:37.6569xx: I tensorflow/core/common_runtime/bfc_allocator.cc:1034] N Chunks of size xxxxxxxx totalling xxxMiB` allocator output, keeping only the largest chunks, the allocator summary, and the final traceback:

```
2020-11-04 16:53:37.657015: I tensorflow/core/common_runtime/bfc_allocator.cc:1034] 809 Chunks of size 9437184 totalling 7.11GiB
2020-11-04 16:53:37.657023: I tensorflow/core/common_runtime/bfc_allocator.cc:1034] 143 Chunks of size 9830400 totalling 1.31GiB
2020-11-04 16:53:37.657172: I tensorflow/core/common_runtime/bfc_allocator.cc:1034] 35 Chunks of size 15360000 totalling 512.70MiB
2020-11-04 16:53:37.657211: I tensorflow/core/common_runtime/bfc_allocator.cc:1034] 31 Chunks of size 18266112 totalling 540.02MiB
2020-11-04 16:53:37.657245: I tensorflow/core/common_runtime/bfc_allocator.cc:1034] 44 Chunks of size 39321600 totalling 1.61GiB
2020-11-04 16:53:37.657248: I tensorflow/core/common_runtime/bfc_allocator.cc:1034] 28 Chunks of size 93763584 totalling 2.44GiB
2020-11-04 16:53:37.657295: I tensorflow/core/common_runtime/bfc_allocator.cc:1038] Sum Total of in-use chunks: 19.95GiB
2020-11-04 16:53:37.657299: I tensorflow/core/common_runtime/bfc_allocator.cc:1040] total_region_allocated_bytes_: 21468487680 memory_limit_: 21468487808 available bytes: 128 curr_region_allocation_bytes_: 42936975872
2020-11-04 16:53:37.657313: I tensorflow/core/common_runtime/bfc_allocator.cc:1046] Stats: Limit: 21468487808 InUse: 21422771968 MaxInUse: 21432602368 NumAllocs: 46783782 MaxAllocSize: 187526912 Reserved: 0 PeakReserved: 0 LargestFreeBlock: 0
2020-11-04 16:53:37.657683: W tensorflow/core/common_runtime/bfc_allocator.cc:439] ****************************************************************************************************
2020-11-04 16:53:37.657727: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at matmul_op.cc:481 : Resource exhausted: OOM when allocating tensor with shape[3200,3072] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "<stdin>", line 4, in <module>
  File "myLib/python3.7/site-packages/kerastuner/engine/base_tuner.py", line 131, in search
    self.run_trial(trial, *fit_args, **fit_kwargs)
  File "myLib/python3.7/site-packages/kerastuner/engine/multi_execution_tuner.py", line 98, in run_trial
    history = model.fit(*fit_args, **copied_fit_kwargs)
  File "myLib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
    return method(self, *args, **kwargs)
  File "myLib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
    tmp_logs = train_function(iterator)
  File "myLib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
    result = self._call(*args, **kwds)
  File "myLib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 840, in _call
    return self._stateless_fn(*args, **kwds)
  File "myLib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2829, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "myLib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1848, in _filtered_call
    cancellation_manager=cancellation_manager)
  File "myLib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "myLib/python3.7/site-packages/tensorflow/python/eager/function.py", line 550, in call
    ctx=ctx)
  File "myLib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[3200,3072] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[node BERT_MultiLabel_MultiClass/tf_bert_model/bert/encoder/layer_._11/intermediate/dense/Tensordot/MatMul (defined at myLib/python3.7/site-packages/transformers/modeling_tf_bert.py:327) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
	 [[clip_by_norm_1/truediv/_546]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
  (1) Resource exhausted: OOM when allocating tensor with shape[3200,3072] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[node BERT_MultiLabel_MultiClass/tf_bert_model/bert/encoder/layer_._11/intermediate/dense/Tensordot/MatMul (defined at myLib/python3.7/site-packages/transformers/modeling_tf_bert.py:327) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_1394299]
Function call stack:
train_function -> train_function
```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8305/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8304
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8304/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8304/comments
https://api.github.com/repos/huggingface/transformers/issues/8304/events
https://github.com/huggingface/transformers/issues/8304
736,316,064
MDU6SXNzdWU3MzYzMTYwNjQ=
8,304
Exception: process 0 terminated with exit code 17 when using xla_spawn
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! Usually this error is preceded by another message. The exception with exit code 17 is not the error in itself. Please paste the full error stack trace here, alongside the command you used to launch your script.", "Hi Lysandre, here is the full error I get on the google cloud it is\nterminated because of this exception\n\nException: process 0 terminated with exit code 17\n\nbut I do not see any other error messages, do you know what might be\ncausing this?\nthanks for your help.\nBest\nRabeeh\n\n\n\"Traceback (most recent call last): File\n\"/root/anaconda3/envs/pytorch/lib/python3.6/runpy.py\", line 193, in\n_run_module_as_main \"__main__\", mod_spec) File\n\"/root/anaconda3/envs/pytorch/lib/python3.6/runpy.py\", line 85, in\n_run_code exec(code, run_globals) File \"/workdir/seq2seq/xla_spawn.py\",\nline 79, in <module> app.run(main, flags_parser=parse_args) File\n\"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py\",\nline 300, in run _run_main(main, args) File\n\"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py\",\nline 251, in _run_main sys.exit(main(argv)) File\n\"/workdir/seq2seq/xla_spawn.py\", line 75, in main xmp.spawn(mod._mp_fn,\nargs=(), nprocs=xla_args.num_cores) File\n\"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\",\nline 395, in spawn start_method=start_method) File\n\"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py\",\nline 157, in start_processes while not context.join(): File\n\"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py\",\nline 112, in join (error_index, exitcode) Exception: process 0 terminated\nwith exit code 17\n\n\n\nOn Wed, Nov 4, 2020 at 7:45 PM Lysandre Debut <[email protected]>\nwrote:\n\n> Hi! Usually this error is preceded by another message. 
The exception with\n> exit code 17 is not the error in itself.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8304#issuecomment-721907293>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHH2HI6ICB6HB24S7CWDSOGOMNANCNFSM4TKLTIGA>\n> .\n>\n", "Hi\nthank you for the hint, now I got it, I can debug the rest, thanks for this.\nBest\nRabeeh\n\nOn Wed, Nov 4, 2020 at 7:50 PM Rabeeh Karimi Mahabadi <[email protected]>\nwrote:\n\n> Hi Lysandre, here is the full error I get on the google cloud it is\n> terminated because of this exception\n>\n> Exception: process 0 terminated with exit code 17\n>\n> but I do not see any other error messages, do you know what might be\n> causing this?\n> thanks for your help.\n> Best\n> Rabeeh\n>\n>\n> \"Traceback (most recent call last): File\n> \"/root/anaconda3/envs/pytorch/lib/python3.6/runpy.py\", line 193, in\n> _run_module_as_main \"__main__\", mod_spec) File\n> \"/root/anaconda3/envs/pytorch/lib/python3.6/runpy.py\", line 85, in\n> _run_code exec(code, run_globals) File \"/workdir/seq2seq/xla_spawn.py\",\n> line 79, in <module> app.run(main, flags_parser=parse_args) File\n> \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py\",\n> line 300, in run _run_main(main, args) File\n> \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py\",\n> line 251, in _run_main sys.exit(main(argv)) File\n> \"/workdir/seq2seq/xla_spawn.py\", line 75, in main xmp.spawn(mod._mp_fn,\n> args=(), nprocs=xla_args.num_cores) File\n> \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\",\n> line 395, in spawn start_method=start_method) File\n> \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py\",\n> line 157, in start_processes while not context.join(): File\n> \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py\",\n> line 112, in join (error_index, exitcode) Exception: process 0 terminated\n> with exit code 17\n>\n>\n>\n> On Wed, Nov 4, 2020 at 7:45 PM Lysandre Debut <[email protected]>\n> wrote:\n>\n>> Hi! Usually this error is preceded by another message. The exception with\n>> exit code 17 is not the error in itself.\n>>\n>> —\n>> You are receiving this because you authored the thread.\n>> Reply to this email directly, view it on GitHub\n>> <https://github.com/huggingface/transformers/issues/8304#issuecomment-721907293>,\n>> or unsubscribe\n>> <https://github.com/notifications/unsubscribe-auth/ARPXHH2HI6ICB6HB24S7CWDSOGOMNANCNFSM4TKLTIGA>\n>> .\n>>\n>\n" ]
1,604
1,608
1,608
NONE
null
Hi, I am calling finetune_trainer.py through the xla_spawn command on TPU and I am getting this error on WMT. Any idea what causes it? Thanks.

```
Traceback (most recent call last):
  File "/root/anaconda3/envs/pytorch/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/root/anaconda3/envs/pytorch/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/workdir/seq2seq/xla_spawn.py", line 113, in <module>
    app.run(main, flags_parser=parse_args)
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/workdir/seq2seq/xla_spawn.py", line 108, in main
    xmp.spawn(mod._mp_fn, args=(), nprocs=xla_args.num_cores)
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn
    start_method=start_method)
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
    while not context.join():
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 112, in join
    (error_index, exitcode)
Exception: process 0 terminated with exit code 17
```
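A debugging note, hedged because the exact launcher under /workdir/seq2seq is not shown here: exit code 17 from `xmp.spawn` only says that a child process died, and the real traceback is printed earlier in the child's output, as the comments above confirm. With the standard `examples/xla_spawn.py`, rerunning on a single core makes torch_xla run the training function in the parent process, so the underlying exception surfaces directly:

```
# Rerun single-core so the child's traceback is printed in the parent
# (paths and training arguments are placeholders for your own setup)
python xla_spawn.py --num_cores 1 finetune_trainer.py <your usual args>
```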
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8304/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8303
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8303/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8303/comments
https://api.github.com/repos/huggingface/transformers/issues/8303/events
https://github.com/huggingface/transformers/issues/8303
736,314,232
MDU6SXNzdWU3MzYzMTQyMzI=
8,303
How can we freeze the last few layers of a BERT model using tf 2.0(or higher)
{ "login": "soumya997", "id": 54326088, "node_id": "MDQ6VXNlcjU0MzI2MDg4", "avatar_url": "https://avatars.githubusercontent.com/u/54326088?v=4", "gravatar_id": "", "url": "https://api.github.com/users/soumya997", "html_url": "https://github.com/soumya997", "followers_url": "https://api.github.com/users/soumya997/followers", "following_url": "https://api.github.com/users/soumya997/following{/other_user}", "gists_url": "https://api.github.com/users/soumya997/gists{/gist_id}", "starred_url": "https://api.github.com/users/soumya997/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/soumya997/subscriptions", "organizations_url": "https://api.github.com/users/soumya997/orgs", "repos_url": "https://api.github.com/users/soumya997/repos", "events_url": "https://api.github.com/users/soumya997/events{/privacy}", "received_events_url": "https://api.github.com/users/soumya997/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
How can we freeze the last few layers of a BERT model using TF 2.0 (or higher)? I just want to take the pre-trained layers and train only the last few layers when fine-tuning on my own data, in minimal time.
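Since this issue was closed without an answer, here is one possible approach as a sketch only. It assumes the TFBertModel layout of transformers 3.x, where the encoder blocks are a Python list under model.bert.encoder.layer; freezing the embeddings and everything except the last two blocks leaves only the top of the network (plus any task head you add) trainable:

```py
from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-uncased")

# Freeze the embeddings and all encoder blocks except the last two;
# only the unfrozen blocks and the pooler will receive gradients.
model.bert.embeddings.trainable = False
for block in model.bert.encoder.layer[:-2]:
    block.trainable = False

# Sanity check: far fewer tensors should now be trainable.
print("trainable weight tensors:", len(model.trainable_weights))
```

Compiling a Keras model on top of this frozen backbone then fine-tunes only the remaining layers, which is usually much faster per epoch.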
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8303/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8302
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8302/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8302/comments
https://api.github.com/repos/huggingface/transformers/issues/8302/events
https://github.com/huggingface/transformers/pull/8302
736,288,375
MDExOlB1bGxSZXF1ZXN0NTE1NTI4Mzc2
8,302
Fix path to old run_language_modeling.py script
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do?

Fixes # (issue)

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8302/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8302", "html_url": "https://github.com/huggingface/transformers/pull/8302", "diff_url": "https://github.com/huggingface/transformers/pull/8302.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8302.patch", "merged_at": 1604513877000 }
https://api.github.com/repos/huggingface/transformers/issues/8301
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8301/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8301/comments
https://api.github.com/repos/huggingface/transformers/issues/8301/events
https://github.com/huggingface/transformers/pull/8301
736,260,309
MDExOlB1bGxSZXF1ZXN0NTE1NTA1NDE2
8,301
Speedup doc build
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
COLLABORATOR
null
# What does this PR do? This PR speeds up the doc build by pinning the version of sphinx to 3.2.1.
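For context, such a pin is a single requirement line; the following is only an illustrative sketch, since the exact file the PR touches (a docs requirements file or a setup.py extra) is not shown in this description:

```
# illustrative requirements entry pinning the docs builder
sphinx==3.2.1
```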
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8301/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8301", "html_url": "https://github.com/huggingface/transformers/pull/8301", "diff_url": "https://github.com/huggingface/transformers/pull/8301.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8301.patch", "merged_at": 1604508681000 }
https://api.github.com/repos/huggingface/transformers/issues/8300
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8300/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8300/comments
https://api.github.com/repos/huggingface/transformers/issues/8300/events
https://github.com/huggingface/transformers/pull/8300
736,247,314
MDExOlB1bGxSZXF1ZXN0NTE1NDk0Njcz
8,300
adding model cards for distilled models
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "> I never know which tags are auto-generated, so please correct me if I did something useless!\r\n\r\nlooks good to me in terms of the tags 👍 " ]
1,604
1,604
1,604
MEMBER
null
# Model cards for distilled models As discussed on Slack, a bunch of model cards for distilled models (at least the ones I contributed to). cc @julien-c I never know which tags are auto-generated, so please correct me if I did something useless!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8300/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8300", "html_url": "https://github.com/huggingface/transformers/pull/8300", "diff_url": "https://github.com/huggingface/transformers/pull/8300.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8300.patch", "merged_at": 1604508106000 }
https://api.github.com/repos/huggingface/transformers/issues/8299
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8299/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8299/comments
https://api.github.com/repos/huggingface/transformers/issues/8299/events
https://github.com/huggingface/transformers/pull/8299
736,233,269
MDExOlB1bGxSZXF1ZXN0NTE1NDgyOTMy
8,299
Model card: T5-base fine-tuned on QASC
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Pretty cool!", "Thank you so much, @julien-c :) More models are coming ;)" ]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do?

Fixes # (issue)

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8299/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8299", "html_url": "https://github.com/huggingface/transformers/pull/8299", "diff_url": "https://github.com/huggingface/transformers/pull/8299.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8299.patch", "merged_at": 1604506815000 }
https://api.github.com/repos/huggingface/transformers/issues/8298
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8298/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8298/comments
https://api.github.com/repos/huggingface/transformers/issues/8298/events
https://github.com/huggingface/transformers/pull/8298
736,210,952
MDExOlB1bGxSZXF1ZXN0NTE1NDY0Mzkw
8,298
Fix validation file loading in scripts
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
COLLABORATOR
null
# What does this PR do? As pointed out in #8295, the validation file was not properly loaded in all the example scripts (one typo copy-pasted several times). This PR fixes that. <!-- Remove if not applicable --> Fixes #8295
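The diff itself is not reproduced in this description, but a fix of the shape it implies, sketched here with hypothetical variable names modeled on the example scripts, points the validation split at the user's validation file instead of the copy-pasted train file:

```py
# Hypothetical reconstruction of the copy-pasted typo and its fix:
data_files = {}
if data_args.train_file is not None:
    data_files["train"] = data_args.train_file
if data_args.validation_file is not None:
    # before the fix, this branch mistakenly reused data_args.train_file
    data_files["validation"] = data_args.validation_file
```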
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8298/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8298", "html_url": "https://github.com/huggingface/transformers/pull/8298", "diff_url": "https://github.com/huggingface/transformers/pull/8298.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8298.patch", "merged_at": 1604504539000 }
https://api.github.com/repos/huggingface/transformers/issues/8297
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8297/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8297/comments
https://api.github.com/repos/huggingface/transformers/issues/8297/events
https://github.com/huggingface/transformers/issues/8297
736,197,456
MDU6SXNzdWU3MzYxOTc0NTY=
8,297
[s2s] 1 GPU test for run_distributed_eval
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "wdyt @stas00 ", "I will work on that, thank you.", "A minor correction to the command (corrected `data_dir`):\r\n```\r\npython -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name Helsinki-NLP/opus-mt-en-ro --save_dir test_data/opus_wmt_en_ro_gens --data_dir test_data/wmt_en_ro\r\n```\r\n\r\nQuestion: why only 1 gpu? we currently don't have it tested at all.\r\n", "I thought 1 GPU test coverage would be runnable in current CI/by more users.\r\nBut if much easier to test 2 gpu/easy to add test for 2 GPU that is great!", "Bottom line - run with as many GPUs as available. \r\n\r\nThank you for clarifying.\r\n" ]
1,604
1,604
1,604
CONTRIBUTOR
null
Add test coverage for run_distributed_eval.py that can run on 1 GPU. The command:
```bash
python -m torch.distributed.launch --nproc_per_node=1 run_distributed_eval.py --model_name Helsinki-NLP/opus-mt-en-ro --save_dir opus_wmt_en_ro_gens --data_dir wmt_en_ro
```
works on 1 GPU. After adding test coverage, we could try to improve API consistency between run_distributed_eval.py and run_eval.py.
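To make the requested coverage concrete, here is a minimal sketch of what such a 1-GPU smoke test could look like; the test name, the use of plain `subprocess`, and the `test_data/wmt_en_ro` path are illustrative assumptions, not the suite's actual conventions.
```python
# Hypothetical 1-GPU smoke test for run_distributed_eval.py; assumes it runs
# from examples/seq2seq on a machine with at least one GPU.
import subprocess
import sys

def test_run_distributed_eval_one_gpu(tmp_path):
    cmd = [
        sys.executable, "-m", "torch.distributed.launch", "--nproc_per_node=1",
        "run_distributed_eval.py",
        "--model_name", "Helsinki-NLP/opus-mt-en-ro",
        "--save_dir", str(tmp_path / "opus_wmt_en_ro_gens"),
        "--data_dir", "test_data/wmt_en_ro",
    ]
    # check=True fails the test on any non-zero exit code from the launcher.
    subprocess.run(cmd, check=True)
```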
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8297/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8296
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8296/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8296/comments
https://api.github.com/repos/huggingface/transformers/issues/8296/events
https://github.com/huggingface/transformers/pull/8296
736,190,384
MDExOlB1bGxSZXF1ZXN0NTE1NDQ3MzU1
8,296
Update README.md
{ "login": "hassoudi", "id": 6810258, "node_id": "MDQ6VXNlcjY4MTAyNTg=", "avatar_url": "https://avatars.githubusercontent.com/u/6810258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hassoudi", "html_url": "https://github.com/hassoudi", "followers_url": "https://api.github.com/users/hassoudi/followers", "following_url": "https://api.github.com/users/hassoudi/following{/other_user}", "gists_url": "https://api.github.com/users/hassoudi/gists{/gist_id}", "starred_url": "https://api.github.com/users/hassoudi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hassoudi/subscriptions", "organizations_url": "https://api.github.com/users/hassoudi/orgs", "repos_url": "https://api.github.com/users/hassoudi/repos", "events_url": "https://api.github.com/users/hassoudi/events{/privacy}", "received_events_url": "https://api.github.com/users/hassoudi/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "I have a newer pull req", "I have a newer pull req" ]
1,604
1,604
1,604
CONTRIBUTOR
null
Fix website address. # What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8296/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8296", "html_url": "https://github.com/huggingface/transformers/pull/8296", "diff_url": "https://github.com/huggingface/transformers/pull/8296.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8296.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8295
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8295/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8295/comments
https://api.github.com/repos/huggingface/transformers/issues/8295/events
https://github.com/huggingface/transformers/issues/8295
736,180,917
MDU6SXNzdWU3MzYxODA5MTc=
8,295
Validation data in `run_mlm.py` is the same as train data
{ "login": "GuillemGSubies", "id": 37592763, "node_id": "MDQ6VXNlcjM3NTkyNzYz", "avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GuillemGSubies", "html_url": "https://github.com/GuillemGSubies", "followers_url": "https://api.github.com/users/GuillemGSubies/followers", "following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}", "gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}", "starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions", "organizations_url": "https://api.github.com/users/GuillemGSubies/orgs", "repos_url": "https://api.github.com/users/GuillemGSubies/repos", "events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}", "received_events_url": "https://api.github.com/users/GuillemGSubies/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Very good catch! Thanks for pointing it out, the PR mentioned above should fix this." ]
1,604
1,604
1,604
CONTRIBUTOR
null
While inspecting the script, I found the following line, where the validation split appears to be loaded from the training file: https://github.com/huggingface/transformers/blob/cb966e640b8b9d0f6e9c06c1655d078a917e5196/examples/language-modeling/run_mlm.py#L204 Am I missing something? Otherwise, I could send a PR.
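For readers without the script at hand, a minimal sketch of the copy-paste pattern behind this issue and its fix; the variable names mirror the script's style, but the snippet is illustrative, not the exact source.
```python
# Illustrative reconstruction of the bug (assumed names, not the exact script).
from types import SimpleNamespace

# Stand-in for the script's parsed arguments.
data_args = SimpleNamespace(train_file="train.txt", validation_file="valid.txt")

data_files = {"train": data_args.train_file}
data_files["validation"] = data_args.train_file       # bug: copy-pasted train_file
data_files["validation"] = data_args.validation_file  # fix applied in #8298
assert data_files["validation"] == "valid.txt"
```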
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8295/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8294
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8294/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8294/comments
https://api.github.com/repos/huggingface/transformers/issues/8294/events
https://github.com/huggingface/transformers/pull/8294
736,165,588
MDExOlB1bGxSZXF1ZXN0NTE1NDI2NjM0
8,294
pipelines: Tentative fix for AutoModel for PegasusConfig.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "After thinking about it a bit, I don't think the `PegasusForConditionalGeneration` should be going there. The `MODEL_MAPPING` is a mapping to all the headless models, i.e., the models that output hidden states without processing them in a head.\r\n\r\nIntroducing `PegasusForConditionalGeneration` would result in a mismatch between every single other models defined in that mapping and this newly added model. \r\n\r\nAdding `BartModel` would fail because of the configuration, as you've said, so imo the best thing to do here is to create a `PegasusModel` that inherits from `BartModel`, and use this in the `AutoModel`.", "Why is summarization pipeline using AutoModel? Shouldn't it require a model with a head?", "The pipeline is using `AutoModel` to load the weights to see if they load. It follows the current installed platform (PT or TF), but if both are installed, it first tries to load the checkpoint in `AutoModel`, and if it fails (wrongly formatted weights), it tries to load it in `TFAutoModel`.\r\n\r\nThis does mean that the model is loaded twice (once in `AutoModel` and another time in the appropriate auto model), which may not be the best performance-wise.\r\n\r\nThe easy fix here is to add a base model for Pegasus (and all models should have base models imo), the somewhat more robust fix is to load the checkpoint directly in the appropriate auto model.", "> This does mean that the model is loaded twice (once in AutoModel and another time in the appropriate auto model), which may not be the best performance-wise.\r\n\r\nYes this is not ideal. If there was simpler way do determine appropriate framework from config that would be much better. Or attempt the AutoModel way but without going the full way (stopping at checking filenames).\r\n\r\n> The easy fix here is to add a base model for Pegasus (and all models should have base models imo), the somewhat more robust fix is to load the checkpoint directly in the appropriate auto model.\r\n\r\nThat seems probably like the best solution. (at least in the short term)", "> and all models should have base models imo\r\n\r\nMarianMT, Pegasus, Blenderbot are all only published/trained/used for one task, why should they have base models?\r\n\r\nWhat ever happened to `config.architectures`? Would that help?", "Some configs (old ones maybe) don't have `architectures` defined.", "> The pipeline is using `AutoModel` to load the weights to see if they load. It follows the current installed platform (PT or TF), but if both are installed, it first tries to load the checkpoint in `AutoModel`, and if it fails (wrongly formatted weights), it tries to load it in `TFAutoModel`.\r\n\r\nJust a note that I'm not 100% sure that our design goal with Pipelines is to be able to load a model automatically in PT/TF without any user input (e.g. in case the model is only TF)\r\n\r\nBesides, in most cases you would have access to the huggingface.co model list API so you would know if model has PT/TF files.", "> Some configs (old ones maybe) don't have `architectures` defined.\r\n\r\nJust a note that we can always backport architectures into the hosted config files (will be easier with the new model versioning system)", "Obsolete." ]
1,604
1,614
1,614
CONTRIBUTOR
null
# What does this PR do? The original error lies in `pipeline(task='summarization', model='google/pegasus-xsum')`. - The code fails while trying to infer the framework from `model_name` (a str). - It attempts to determine the framework by running `AutoModel.from_pretrained(...)` and then `TFAutoModel.from_pretrained(...)`, and decides based on whichever works first. Proposed fix: - make `AutoModel.from_pretrained('google/pegasus-xsum')`, which resolves to a `PegasusConfig`, return a `PegasusForConditionalGeneration`. Not sure if that's desirable, as we are loading a `ForConditionalGeneration` model by default (but it's the only one available anyway). Other available options: - load a `BartModel` (Pegasus inherits from BartForConditionalGeneration) from a `PegasusConfig`, but side effects are unclear and it is odd to load `Bart` from `Pegasus`. - change the `get_framework` function in the pipeline. That was my initial choice, but determining whether a config belongs to a TF or PyTorch model would require replicating some of the `AutoModel` logic anyway, so doing that would lead to a discrepancy between the two code paths just for Pegasus (and maybe `BartConfig`, which also suffers from some issues, but that will be in a follow-up PR). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik @sshleifer Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
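As a companion to the description above, a minimal hedged sketch of the try-PyTorch-then-TensorFlow fallback the pipeline relies on; the helper name `infer_framework` and the broad `except` granularity are assumptions for illustration, not the actual `get_framework` source.
```python
# Hedged sketch of the framework-inference fallback described in this PR;
# not the real implementation in transformers' pipelines module.
from transformers import AutoModel, TFAutoModel

def infer_framework(model_name: str) -> str:
    """Return "pt" if the checkpoint loads as a PyTorch base model, else "tf"."""
    try:
        # Fails for configs without a base-model mapping (e.g. Pegasus at the time).
        AutoModel.from_pretrained(model_name)
        return "pt"
    except Exception:
        TFAutoModel.from_pretrained(model_name)
        return "tf"
```
The PR's point is that this probe breaks when a config (here `PegasusConfig`) has no entry in the base `MODEL_MAPPING`, even though the checkpoint itself is fine.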
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8294/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8294", "html_url": "https://github.com/huggingface/transformers/pull/8294", "diff_url": "https://github.com/huggingface/transformers/pull/8294.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8294.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8293
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8293/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8293/comments
https://api.github.com/repos/huggingface/transformers/issues/8293/events
https://github.com/huggingface/transformers/pull/8293
736,160,368
MDExOlB1bGxSZXF1ZXN0NTE1NDIyMjk1
8,293
[Generate Test] fix greedy generate test
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ping @LysandreJik " ]
1,604
1,604
1,604
MEMBER
null
# What does this PR do? The `greedy_search` test was flaky. This PR should fix it. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8293/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8293", "html_url": "https://github.com/huggingface/transformers/pull/8293", "diff_url": "https://github.com/huggingface/transformers/pull/8293.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8293.patch", "merged_at": 1604501076000 }
https://api.github.com/repos/huggingface/transformers/issues/8292
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8292/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8292/comments
https://api.github.com/repos/huggingface/transformers/issues/8292/events
https://github.com/huggingface/transformers/issues/8292
736,139,385
MDU6SXNzdWU3MzYxMzkzODU=
8,292
Fine-tune BERT NER using TFBertForTokenClassification.from_pretrained
{ "login": "aks2193", "id": 7678330, "node_id": "MDQ6VXNlcjc2NzgzMzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7678330?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aks2193", "html_url": "https://github.com/aks2193", "followers_url": "https://api.github.com/users/aks2193/followers", "following_url": "https://api.github.com/users/aks2193/following{/other_user}", "gists_url": "https://api.github.com/users/aks2193/gists{/gist_id}", "starred_url": "https://api.github.com/users/aks2193/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aks2193/subscriptions", "organizations_url": "https://api.github.com/users/aks2193/orgs", "repos_url": "https://api.github.com/users/aks2193/repos", "events_url": "https://api.github.com/users/aks2193/events{/privacy}", "received_events_url": "https://api.github.com/users/aks2193/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "What is your `metrics`?", "```py\r\noptimizer= AdamWeightDecay(\r\n learning_rate=5e-5,\r\n beta_1=0.9,\r\n beta_2=0.999,\r\n weight_decay_rate=0.01,\r\n epsilon=1e-6,\r\n exclude_from_weight_decay=['layer_norm', 'bias'])\r\noptimizer._HAS_AGGREGATE_GRAD = False\r\nloss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\r\nmetrics=[tf.keras.metrics.SparseCategoricalAccuracy(name=\"acc\")]\r\n```", "@jplu might know what's going on", "Hello @aks2193!\r\n\r\nSorry for this, but for now you cannot use `.compile()` + `.fit()` to train a Token Classification model. To make it short, this is because a layer is not used and then the gradients will be None, something that `.fit()` cannot handle.\r\n\r\nIf you want to train a NER I suggest you to use the [example](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_tf_ner.py).", "Hey @jplu \r\nThanks for the reply. I tried to follow the link and below is how I changed my code\r\n\r\nmodell = TFBertForTokenClassification.from_pretrained('bert-base-uncased',num_labels=len(tag2idx))\r\nmodell.layers[2].activation = tf.keras.activations.softmax\r\nmodell.layers[0].trainable = False\r\nmodell.compile(optimizer=optimizer, loss=loss, metrics=[metrics])\r\nmodell.fit(batch_train_data, epochs=epochs, validation_data=batch_val_data)\r\n\r\ndef compute_metrics(pred):\r\n labels = pred.label_ids\r\n preds = pred.predictions.argmax(-1)\r\n precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary')\r\n acc = accuracy_score(labels, preds)\r\n return {\r\n 'accuracy': acc,\r\n 'f1': f1,\r\n 'precision': precision,\r\n 'recall': recall\r\n }\r\ntraining_args = TFTrainingArguments(\r\n output_dir='./bert_test', # output directory\r\n num_train_epochs=5, # total # of training epochs\r\n per_device_train_batch_size=32, # batch size per device during training\r\n per_device_eval_batch_size=32, # batch size for evaluation\r\n warmup_steps=500, # number of warmup steps for learning rate scheduler\r\n weight_decay=0.01, # strength of weight decay\r\n logging_dir='./logs', # directory for storing logs\r\n learning_rate=3e-5,\r\n )\r\n\r\ntrainer = TFTrainer(\r\n model = modell,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset,\r\n compute_metrics=compute_metrics\r\n )\r\n \r\ntrainer.train()\r\n\r\nBut I am still getting the below error\r\n ValueError: Trying to create optimizer slot variable under the scope for tf.distribute.Strategy (<tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x7fb46c2dec50>), which is different from the scope used for the original variable (<tf.Variable 'tf_bert_for_token_classification_14/classifier/kernel:0' shape=(768, 10) dtype=float32\r\nMake sure the slot variables are created under the same strategy scope. 
This may happen if you're restoring from a checkpoint outside the scope\r\n\r\n\r\nThen I tried defining a strategy scope and include all the above code inside that\r\n\r\nstrategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\r\nwith strategy.scope():\r\n The ABOVE CODE\r\n\r\nOn doing this getting the below error\r\n\r\nMixing different tf.distribute.Strategy objects: <tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x7fb446d98eb8> is not <tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x7fb45fe2d7b8>\r\n\r\nHow does this exactly work?\r\nHow do I define the strategy scope for all this calculations?", "Hey @aks2193 \r\nI'm facing the same problem. Please let me know if you find a solution.", "@aks2193 \r\nTry replacing `strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")` to `training_args.strategy.scope()`. Worked for me.", "@alibi123 has right, you are not properly instanciate your model. Please use the example as it is. You example won't work as well with the `TFTrainer` if you are setting the activation to `softmax` because we don't compute the loss from the logits.", "@jplu I have a question about restoring weights from the checkpoint. How to do it correctly?\r\n\r\nThis is how I try to load weights:\r\n```\r\n>>> model = TFBertForTokenClassification.from_pretrained(settings.BERT_NAME)\r\n>>> model.load_weights('/models/exp1/checkpoint/ckpt-55')\r\n```\r\nI get very long exception message starting with:\r\n```\r\nNothing except the root object matched a checkpointed value. Typically this means that the checkpoint does not match the Python program. The following objects have no matching checkpointed value: [<tf.Variable 't\r\n```\r\n\r\nHere's my training code:\r\n```\r\n dataset = get_dataset(in_fn, debug)\r\n args = TFTrainingArguments(\r\n os.path.join(os.path.join(settings.MODELS_DIR, save_name)),\r\n overwrite_output_dir=True,\r\n do_train=True,\r\n logging_dir=os.path.join(settings.DATA_DIR, 'exp1_logs'),\r\n save_total_limit=2,\r\n )\r\n with args.strategy.scope():\r\n model = TFBertForTokenClassification.from_pretrained(settings.BERT_NAME)\r\n trainer = TFTrainer(\r\n model=model,\r\n args=args,\r\n train_dataset=dataset,\r\n )\r\n trainer.train()\r\n```\r\n`BERT_NAME = 'bert-base-multilingual-cased'`\r\n\r\nI was also trying to use ckpt path in `.from_pretrained()` but also got errors regarding format.\r\n", "You cannot use Keras `load_weights` on a TF checkpoint. If you want to load your model you just have to use the path where you saved your model `model = TFBertForTokenClassification.from_pretrained(\"my_output_dir\")`", "@jplu Thank you for the quick response.\r\nI've tried that in the first place, but get an error as well. 
\r\n\r\nThis is my output dir:\r\n```\r\ncheckpoint$ ls\r\ncheckpoint ckpt-55.data-00000-of-00001 ckpt-55.index ckpt-56.data-00000-of-00001 ckpt-56.index\r\n```\r\nHere are my attempts with error messages:\r\n1:\r\n```\r\nconfig = BertConfig.from_pretrained(settings.BERT_NAME)\r\n>>> model = TFBertForTokenClassification.from_pretrained('/models/exp1/checkpoint', config=config)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py\", line 653, in from_pretrained\r\n [WEIGHTS_NAME, TF2_WEIGHTS_NAME], pretrained_model_name_or_path\r\nOSError: Error no file named ['pytorch_model.bin', 'tf_model.h5'] found in directory /models/exp1/checkpoint or `from_pt` set to False\r\n```\r\n2:\r\n```\r\n>>> model = TFBertForTokenClassification.from_pretrained('/models/exp1/checkpoint/ckpt-55', config=config)\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py\", line 711, in from_pretrained\r\n load_tf_weights(model, resolved_archive_file)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py\", line 268, in load_tf_weights\r\n with h5py.File(resolved_archive_file, \"r\") as f:\r\n File \"/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py\", line 408, in __init__\r\n swmr=swmr)\r\n File \"/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py\", line 173, in make_fid\r\n fid = h5f.open(name, flags, fapl=fapl)\r\n File \"h5py/_objects.pyx\", line 54, in h5py._objects.with_phil.wrapper\r\n File \"h5py/_objects.pyx\", line 55, in h5py._objects.with_phil.wrapper\r\n File \"h5py/h5f.pyx\", line 88, in h5py.h5f.open\r\nOSError: Unable to open file (file signature not found)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py\", line 714, in from_pretrained\r\n \"Unable to load weights from h5 file. \"\r\nOSError: Unable to load weights from h5 file. 
If you tried to load a TF 2.0 model from a PyTorch checkpoint, please set from_pt=True.\r\n```\r\n3: I've even tried `from_pt=True` even though I used TFTrainer and TFBert\r\n```\r\n>>> model = TFBertForTokenClassification.from_pretrained('/models/exp1/checkpoint/ckpt-55', config=config, from_pt=True)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py\", line 703, in from_pretrained\r\n return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_pytorch_utils.py\", line 89, in load_pytorch_checkpoint_in_tf2_model\r\n pt_state_dict = torch.load(pt_path, map_location=\"cpu\")\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 595, in load\r\n return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 764, in _legacy_load\r\n magic_number = pickle_module.load(f, **pickle_load_args)\r\n_pickle.UnpicklingError: invalid load key, '\\x00'.\r\n```\r\n4: I've also tried to add config.json into the output dir\r\n```\r\n>>> model = TFBertForTokenClassification.from_pretrained('/models/exp1/checkpoint')\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py\", line 653, in from_pretrained\r\n [WEIGHTS_NAME, TF2_WEIGHTS_NAME], pretrained_model_name_or_path\r\nOSError: Error no file named ['pytorch_model.bin', 'tf_model.h5'] found in directory /models/exp1/checkpoint or `from_pt` set to False\r\n```", "As I said, these are normal errors because: **You cannot use Keras load_weights on a TF checkpoint.** You have to use your output dir not the file or the checkpoint dir: `model = TFBertForTokenClassification.from_pretrained('/models/exp1')`.", "@jplu Sorry for bothering. But still doesn't work. It expects `'pytorch_model.bin', 'tf_model.h5'`.\r\n```\r\n>>> model = TFBertForTokenClassification.from_pretrained('/models/exp1')\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py\", line 653, in from_pretrained\r\n [WEIGHTS_NAME, TF2_WEIGHTS_NAME], pretrained_model_name_or_path\r\nOSError: Error no file named ['pytorch_model.bin', 'tf_model.h5'] found in directory /models/exp1 or `from_pt` set to False\r\n\r\n>>> model = TFBertForTokenClassification.from_pretrained('/models/exp1', from_pt=True)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py\", line 653, in from_pretrained\r\n [WEIGHTS_NAME, TF2_WEIGHTS_NAME], pretrained_model_name_or_path\r\nOSError: Error no file named ['pytorch_model.bin', 'tf_model.h5'] found in directory /models/exp1 or `from_pt` set to False\r\n\r\n```\r\n\r\nBut my output_dir only contains `checkpoint` dir", "This is because you are trying to load a PyTorch model into a TensorFlow one with `from_pt=True`, remove this parameter. If not working it means that your models have not been properly saved.\r\n\r\nDid you call the `save` method of the trainer?", "No, I haven't. Sorry, my bad. I thought that I can use checkpoints. 
\r\nThanks for your help!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
Hey, I am new to the transformers BERT training world and am trying to fine-tune a BERT model for NER on a CoNLL-like dataset. But it's not training the model and gives the error below: ValueError: No gradients provided for any variable: ['tf_bert_for_token_classification_8/classifier/kernel:0', 'tf_bert_for_token_classification_8/classifier/bias:0']. Below is my code:
```py
tr_inputs = tf.convert_to_tensor(tr_inputs)
val_inputs = tf.convert_to_tensor(val_inputs)
tr_tags = tf.convert_to_tensor(tr_tags)
val_tags = tf.convert_to_tensor(val_tags)
tr_masks = tf.convert_to_tensor(tr_masks)
val_masks = tf.convert_to_tensor(val_masks)
tr_segs = tf.convert_to_tensor(tr_segs)
val_segs = tf.convert_to_tensor(val_segs)

input_features_dict = {"input_ids": tr_inputs, "attention_mask": tr_masks, "token_type_ids": tr_segs, "labels": tr_tags}
val_features_dict = {"input_ids": val_inputs, "attention_mask": val_masks, "token_type_ids": val_segs, "labels": val_tags}  # fixed: labels were tr_tags

train_data = tf.data.Dataset.from_tensor_slices(input_features_dict)
batch_train_data = train_data.batch(batch_num)
valid_data = tf.data.Dataset.from_tensor_slices(val_features_dict)
batch_val_data = valid_data.batch(batch_num)  # renamed from batch_valid_data to match fit() below

modell = TFBertForTokenClassification.from_pretrained('bert-base-uncased', num_labels=len(tag2idx))
modell.layers[2].activation = tf.keras.activations.softmax
modell.layers[0].trainable = False
modell.compile(optimizer=optimizer, loss=loss, metrics=[metrics])
modell.fit(batch_train_data, epochs=epochs, validation_data=batch_val_data)
```
I am not sure what needs to be done. Any advice/pointers on this would be highly helpful for me.
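Per the resolution in this issue's comments (use `TFTrainer` and create the model under the trainer's strategy scope instead of `compile()`/`fit()`), here is a minimal hedged sketch of that pattern; `train_dataset` and `eval_dataset` are assumed to be pre-built `tf.data.Dataset` objects, and `num_labels=10` stands in for `len(tag2idx)`.
```python
# Hedged sketch of the TFTrainer pattern suggested in this issue's comments;
# train_dataset/eval_dataset are assumed pre-built tf.data.Dataset objects.
from transformers import TFBertForTokenClassification, TFTrainer, TFTrainingArguments

training_args = TFTrainingArguments(
    output_dir="./bert_ner",
    num_train_epochs=5,
    per_device_train_batch_size=32,
)

# The model must be instantiated under the trainer's distribution strategy
# scope, otherwise the optimizer slot variables land in a different scope.
with training_args.strategy.scope():
    model = TFBertForTokenClassification.from_pretrained(
        "bert-base-uncased", num_labels=10  # placeholder for len(tag2idx)
    )

trainer = TFTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
trainer.save_model()  # save via the trainer so from_pretrained() can reload it
```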
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8292/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8291
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8291/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8291/comments
https://api.github.com/repos/huggingface/transformers/issues/8291/events
https://github.com/huggingface/transformers/issues/8291
736,122,787
MDU6SXNzdWU3MzYxMjI3ODc=
8,291
Could you please give me a PyTorch example of xlm-roberta-(base/large) for a multilingual-text question?
{ "login": "wmathor", "id": 32392878, "node_id": "MDQ6VXNlcjMyMzkyODc4", "avatar_url": "https://avatars.githubusercontent.com/u/32392878?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wmathor", "html_url": "https://github.com/wmathor", "followers_url": "https://api.github.com/users/wmathor/followers", "following_url": "https://api.github.com/users/wmathor/following{/other_user}", "gists_url": "https://api.github.com/users/wmathor/gists{/gist_id}", "starred_url": "https://api.github.com/users/wmathor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wmathor/subscriptions", "organizations_url": "https://api.github.com/users/wmathor/orgs", "repos_url": "https://api.github.com/users/wmathor/repos", "events_url": "https://api.github.com/users/wmathor/events{/privacy}", "received_events_url": "https://api.github.com/users/wmathor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,605
1,605
CONTRIBUTOR
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details I read the document of [XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html#overview), but i also have a lots of question. For example, `should be able to determine the correct language from the input ids.` how to determine? If you have a example for how to use xlm-roberta about multilingual text question, please show me, Thank you very much!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8291/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8290
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8290/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8290/comments
https://api.github.com/repos/huggingface/transformers/issues/8290/events
https://github.com/huggingface/transformers/issues/8290
736,103,678
MDU6SXNzdWU3MzYxMDM2Nzg=
8,290
finetuning T5 on translation on TPU, questions about clarifying the setup
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "in the main page of examples you mention one can pass any script to run it on tpu, but inside the seq2seq it seems one needs to use finetune_trainer and not finetune.py for tpus, I am confused which one to use, thanks for your help", "@sshleifer @patil-suraj Maybe we could improve the documentation here", "Hi, thank you @LysandreJik, do you know which version of finetune.py to finetune_trainer.py are working with tpus? in the documentation it is written any example can be run with multiple tpus by using xla_spawn.py but I am not sure if this is true for finetune.py too. thanks ", "`finetune_trainer.py` works with TPU, here is [the wmt script](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/builtin_trainer/finetune_tpu.sh)\r\n\r\n\r\n1) They were developed at different times. We are trying to get them both working well.\r\n2) finetune.py should not be used with TPU at all.\r\n3) yes, see script.\r\n4) see script\r\n5) see script\r\n6) see script\r\n7) I would guess that those samplers don't work on TPU.\r\n8) No it does not.\r\n9) No idea, maybe @patil-suraj knows.\r\n\r\n\r\n", "thanks Sam for this, so finetune.py does not work? In the documentation,\nthis is written one can run all examples with xla_spawn on TPU, I\nappreciate updating the README mentioning it.\nthank you.\n\nOn Wed, Nov 4, 2020 at 8:06 PM Sam Shleifer <[email protected]>\nwrote:\n\n> finetune_trainer.py works with TPU, cc @patil-suraj\n> <https://github.com/patil-suraj> .\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8290#issuecomment-721917741>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHH345RVTXO3EA7ZUPUTSOGQ3XANCNFSM4TKBDLQQ>\n> .\n>\n", "Correct, updated docs.", "thank you Sam\n\nOn Wed, Nov 4, 2020 at 9:46 PM Sam Shleifer <[email protected]>\nwrote:\n\n> Correct, updated docs.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8290#issuecomment-721966350>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHH4QZWN6KEL2BKSF7BTSOG4TVANCNFSM4TKBDLQQ>\n> .\n>\n" ]
1,604
1,604
1,604
NONE
null
Hi, I'd like to run finetune.py with the WMT dataset on TPU, to train from scratch rather than fine-tune. I would appreciate a response to some questions: 1) Why are there two versions of fine-tuning, one using Seq2SeqTrainer and one using finetune.py, and which one is suitable for my use case? 2) Seq2SeqTrainer does not support predict on TPU; is this the case for finetune.py as well? 3) When running on TPU, the docs say to use xla_spawn.py; since finetune.py is written with PyTorch Lightning, is it necessary to launch it with xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)? 4) In the finetune.py dataloader, I see it is distributed based on the number of GPUs, but I cannot see that it is also distributed when one uses TPUs. Does it take care of making the dataloader distributed automatically in the TPU case? 5) If using finetune.py with TPU, is there any specific setup I need to add for fine-tuning/training T5 on WMT? 6) I assume one needs to use something like the sampler below for TPU-distributed dataloaders; I see this is not the case in the code of finetune.py. Does data parallelism work in finetune.py? 7) When should one use sortish_sampler/dynamic_sampler, and do they work on TPUs?
```
sampler = torch.utils.data.distributed.DistributedSampler(
    dataset,
    num_replicas=xm.xrt_world_size(),
    rank=xm.get_ordinal(),
    shuffle=True)
```
8) Does fp16 work with TPUs as well? 9) With iterable datasets, the dataloader in finetune_trainer does not seem to work with a distributed setup on TPU; do you know how to implement it? Am I missing something? Thank you very much.
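On question 3, a minimal hedged sketch of what an xla_spawn-style TPU launch does conceptually: fork one process per TPU core with `torch_xla`'s multiprocessing and call the script's entry point in each. Here `main` is an assumption standing in for the training script's real entry function (e.g. finetune_trainer's `main()`).
```python
# Hedged sketch of the xla_spawn-style TPU launch; `main` is an assumed
# placeholder for the training script's entry point.
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    # Each spawned process drives one TPU core.
    main()

if __name__ == "__main__":
    xmp.spawn(_mp_fn, args=(), nprocs=8)
```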
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8290/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8289
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8289/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8289/comments
https://api.github.com/repos/huggingface/transformers/issues/8289/events
https://github.com/huggingface/transformers/issues/8289
736,073,206
MDU6SXNzdWU3MzYwNzMyMDY=
8,289
Why does XLMRobertaTokenizer raise a KeyError on token_type_ids?
{ "login": "wmathor", "id": 32392878, "node_id": "MDQ6VXNlcjMyMzkyODc4", "avatar_url": "https://avatars.githubusercontent.com/u/32392878?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wmathor", "html_url": "https://github.com/wmathor", "followers_url": "https://api.github.com/users/wmathor/followers", "following_url": "https://api.github.com/users/wmathor/following{/other_user}", "gists_url": "https://api.github.com/users/wmathor/gists{/gist_id}", "starred_url": "https://api.github.com/users/wmathor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wmathor/subscriptions", "organizations_url": "https://api.github.com/users/wmathor/orgs", "repos_url": "https://api.github.com/users/wmathor/repos", "events_url": "https://api.github.com/users/wmathor/events{/privacy}", "received_events_url": "https://api.github.com/users/wmathor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I also have this question" ]
1,604
1,640
1,604
CONTRIBUTOR
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details ```python encoded_pair = self.tokenizer(sent_ko, sent_cn, padding='max_length', # Pad to max_length truncation=True, # Truncate to max_length max_length=self.maxlen, return_tensors='pt') # Return torch.Tensor objects token_ids = encoded_pair['input_ids'].squeeze(0) attn_masks = encoded_pair['attention_mask'].squeeze(0) token_type_ids = encoded_pair['token_type_ids'].squeeze(0) ``` ``` File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 234, in __getitem__ return self.data[item] KeyError: 'token_type_ids' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8289/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8288
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8288/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8288/comments
https://api.github.com/repos/huggingface/transformers/issues/8288/events
https://github.com/huggingface/transformers/issues/8288
736,058,142
MDU6SXNzdWU3MzYwNTgxNDI=
8,288
Training T5-large model for Question Answering
{ "login": "dulanafdo", "id": 67677823, "node_id": "MDQ6VXNlcjY3Njc3ODIz", "avatar_url": "https://avatars.githubusercontent.com/u/67677823?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dulanafdo", "html_url": "https://github.com/dulanafdo", "followers_url": "https://api.github.com/users/dulanafdo/followers", "following_url": "https://api.github.com/users/dulanafdo/following{/other_user}", "gists_url": "https://api.github.com/users/dulanafdo/gists{/gist_id}", "starred_url": "https://api.github.com/users/dulanafdo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dulanafdo/subscriptions", "organizations_url": "https://api.github.com/users/dulanafdo/orgs", "repos_url": "https://api.github.com/users/dulanafdo/repos", "events_url": "https://api.github.com/users/dulanafdo/events{/privacy}", "received_events_url": "https://api.github.com/users/dulanafdo/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "This notebook should help: https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
Are there any specific documents that I can follow to train the T5 model for question answering? I found this (https://huggingface.co/transformers/custom_datasets.html#qa-squad) on your website, but it does not allow me to use a T5 model instead of DistilBert.
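In case it helps, a minimal text-to-text sketch; the `question: ... context: ...` prompt format is an assumption based on the T5 paper's SQuAD setup, and `t5-small` stands in for `t5-large`:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")  # swap in t5-large for real runs
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 casts QA as text-to-text: the answer string itself is the target.
inputs = tokenizer(
    "question: Who wrote Hamlet? context: Hamlet is a tragedy by William Shakespeare.",
    return_tensors="pt",
)
labels = tokenizer("William Shakespeare", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss  # plug this step into any training loop
loss.backward()
```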
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8288/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8287
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8287/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8287/comments
https://api.github.com/repos/huggingface/transformers/issues/8287/events
https://github.com/huggingface/transformers/pull/8287
736,029,261
MDExOlB1bGxSZXF1ZXN0NTE1MzE0MDAw
8,287
Fix typo in language-modeling README.md
{ "login": "gpengzhi", "id": 16913241, "node_id": "MDQ6VXNlcjE2OTEzMjQx", "avatar_url": "https://avatars.githubusercontent.com/u/16913241?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gpengzhi", "html_url": "https://github.com/gpengzhi", "followers_url": "https://api.github.com/users/gpengzhi/followers", "following_url": "https://api.github.com/users/gpengzhi/following{/other_user}", "gists_url": "https://api.github.com/users/gpengzhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/gpengzhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gpengzhi/subscriptions", "organizations_url": "https://api.github.com/users/gpengzhi/orgs", "repos_url": "https://api.github.com/users/gpengzhi/repos", "events_url": "https://api.github.com/users/gpengzhi/events{/privacy}", "received_events_url": "https://api.github.com/users/gpengzhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? Fix the typo in `README.md` in the `language-modeling` folder. <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8287/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8287", "html_url": "https://github.com/huggingface/transformers/pull/8287", "diff_url": "https://github.com/huggingface/transformers/pull/8287.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8287.patch", "merged_at": 1604500683000 }
https://api.github.com/repos/huggingface/transformers/issues/8286
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8286/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8286/comments
https://api.github.com/repos/huggingface/transformers/issues/8286/events
https://github.com/huggingface/transformers/pull/8286
736,006,333
MDExOlB1bGxSZXF1ZXN0NTE1Mjk0OTE1
8,286
Improve QA pipeline error handling
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? - The issue is that with the previous code we would have the following: ```python qa_pipeline = (...) qa_pipeline(question="Where was he born ?", context="") -> IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) ``` The goal here is to improve this to raise a ValueError wherever possible (a sketch of the intended validation follows the template below). While at it, I tried to simplify QuestionArgumentHandler's code to make it smaller and more compact while keeping backward compatibility. Quick note: For the tests, I feel they would be more readable if it were possible to write ```python self.assertEqual(qa(.....), [SquadExample(None, Q, C, None, None,...)]) ``` as it would cover types, length, and deep equality in one assertion. However, it's not possible because SquadExample does not implement `__eq__`. It felt out of scope, but if reviewers think it would be a nice addition, I'd be happy to implement it and change the test. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @mfuntowicz @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
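As referenced above, a minimal sketch of the kind of fail-fast validation this PR is after; the function name and messages are illustrative, not the actual implementation:

```python
def validate_qa_inputs(question: str, context: str) -> None:
    # Raise an actionable ValueError up front instead of letting an empty
    # context surface later as an opaque IndexError inside the model.
    if question is None or not question.strip():
        raise ValueError("`question` cannot be None or empty")
    if context is None or not context.strip():
        raise ValueError("`context` cannot be None or empty")

validate_qa_inputs("Where was he born ?", "")  # -> ValueError, not IndexError
```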
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8286/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8286", "html_url": "https://github.com/huggingface/transformers/pull/8286", "diff_url": "https://github.com/huggingface/transformers/pull/8286.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8286.patch", "merged_at": 1604507443000 }
https://api.github.com/repos/huggingface/transformers/issues/8285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8285/comments
https://api.github.com/repos/huggingface/transformers/issues/8285/events
https://github.com/huggingface/transformers/issues/8285
735,971,462
MDU6SXNzdWU3MzU5NzE0NjI=
8,285
RAG performance on Open-NQ dataset much lower than expected
{ "login": "gaobo1987", "id": 63237333, "node_id": "MDQ6VXNlcjYzMjM3MzMz", "avatar_url": "https://avatars.githubusercontent.com/u/63237333?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gaobo1987", "html_url": "https://github.com/gaobo1987", "followers_url": "https://api.github.com/users/gaobo1987/followers", "following_url": "https://api.github.com/users/gaobo1987/following{/other_user}", "gists_url": "https://api.github.com/users/gaobo1987/gists{/gist_id}", "starred_url": "https://api.github.com/users/gaobo1987/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaobo1987/subscriptions", "organizations_url": "https://api.github.com/users/gaobo1987/orgs", "repos_url": "https://api.github.com/users/gaobo1987/repos", "events_url": "https://api.github.com/users/gaobo1987/events{/privacy}", "received_events_url": "https://api.github.com/users/gaobo1987/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "Maybe @lhoestq @patrickvonplaten have an idea", "Hey @gaobo1987,\r\n\r\nWe checked that the models match the performance as reported in the paper. \r\n\r\nDid you run the model as stated in https://github.com/huggingface/transformers/blob/master/examples/rag/README.md ? ", "Which index did you use exactly with wiki_dpr ? This EM value is expected if you used the `compressed` one. For the `exact` one you might need to increase the efSearch parameter of the index. I ran some indexing experiments recently and I'll update the default parameters of the wiki_dpr index with the optimized ones that reproduce RAG's paper results.\r\n\r\nEDIT: they've been updated a few weeks ago", "> Hey @gaobo1987,\r\n> \r\n> We checked that the models match the performance as reported in the paper.\r\n> \r\n> Did you run the model as stated in https://github.com/huggingface/transformers/blob/master/examples/rag/README.md ?\r\n\r\nThanks for your reply @patrickvonplaten ,\r\n\r\nwe did not use the example run script there, but followed the code snippets provided in the huggingface documentation:\r\n\r\n```python\r\nfrom transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration\r\nimport torch\r\ntokenizer = RagTokenizer.from_pretrained(\"facebook/rag-sequence-nq\")\r\nretriever = RagRetriever.from_pretrained(\"facebook/rag-sequence-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\n# initialize with RagRetriever to do everything in one forward call\r\nmodel = RagSequenceForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever)\r\ninput_dict = tokenizer.prepare_seq2seq_batch(\"How many people live in Paris?\", \"In Paris, there are 10 million people.\", return_tensors=\"pt\")\r\ninput_ids = input_dict[\"input_ids\"]\r\noutputs = model(input_ids=input_ids, labels=input_dict[\"labels\"])\r\n# or use retriever seperately\r\nmodel = RagSequenceForGeneration.from_pretrained(\"facebook/rag-sequence-nq\", use_dummy_dataset=True)\r\n# 1. Encode\r\nquestion_hidden_states = model.question_encoder(input_ids)[0]\r\n# 2. Retrieve\r\ndocs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors=\"pt\")\r\ndoc_scores = torch.bmm(question_hidden_states.unsqueeze(1), docs_dict[\"retrieved_doc_embeds\"].float().transpose(1, 2)).squeeze(1)\r\n# 3. Forward to generator\r\noutputs = model(context_input_ids=docs_dict[\"context_input_ids\"], context_attention_mask=docs_dict[\"context_attention_mask\"], doc_scores=doc_scores, decoder_input_ids=input_dict[\"labels\"])\r\n```\r\nsee here: https://huggingface.co/transformers/model_doc/rag.html#ragsequenceforgeneration\r\n\r\nWe did use our own evaluation script for computing EM scores.\r\n\r\nIn general, we tried to follow the prescribed steps from official source as exactly as possible, as for the customized EM calculation, difference may arise there, but I believe the main source of performance difference lies somewhere else.", "> Which index did you use exactly with wiki_dpr ? This EM value is expected if you used the `compressed` one. For the `exact` one you might need to increase the efSearch parameter of the index. I ran some indexing experiments recently and I'll update the default parameters of the wiki_dpr index with the optimised ones that reproduce RAG's paper results.\r\n\r\nthanks for the reply @lhoestq , we used the \"exact\" mode of the wiki_dpr index, indeed, we haven't tried the \"compressed\" mode, nor did we tune the \"exact\" index. 
Thanks for the update, we will check the \"compressed\" alternative, and the parameter tuning of the \"exact\" index. Also great to know that you will update the default parameters!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi, to provide an update on this issue. Recently I refactored my own RAG code based transformers-4.1.1, and obtained EM=40.7 performance on the open NQ dataset with rag-sequence-nq model (n_beams=4) and FAISS HNSW index with n_docs=5, efSearch=256 and efConstruction=200. Unfortunately it still didn't reach the expected 44.5 score. Are these sound parameters? Am I missing any? What is the best parameter combination used at Huggingface? Any advice is much appreciated, thanks! (Note that I couldn't use the original rag code as there is firewall restrictions on my server that prevented downloading the wiki_dpr.py script as well the arrow files for exact indexing, so I have to download these files on a much less powerful laptop and upload them to my server. Consequently, I am using a modified version of RagSequenceForGeneration along with a modified RagRetriever) @lhoestq ", "@gaobo1987 \r\nCan you please share how exactly you played around with the efSearch and efConstruction parameters?\r\n\r\nAs in where in the code did you make the changes??", "hello @krishanudb , thanks for your reply. What I did is merely manually downloading the wiki_dpr-train.arrow file, then use it to construct a faiss hnsw index with efSearch=256, efConstruction=200, then save this index to disk. I wrote a wrapper around RagRetriever and RagSequenceForGeneration respectively so that rag can run directly on the aforementioned faiss index, instead of relying on huggingFace.Datasets utilities and other caching sub-routines. I did not change the models in any way. Could you provide an answer to my question regarding the best combination of parameters from huggingFace to reach the performance as reported in the original paper? Thanks for your time", "@gaobo1987 \r\nThere are several versions of the DPR model (single-nq vs multiset) as well as the precomputed passages wiki_dpr\r\nI am not sure which one the authors used to get 44% EM but I think they have used the single-nq models for the tasks.\r\n\r\nMake sure that you are using the 'right; model. Maybe the authors can shed more light on this..\r\n\r\nEven I am facing the same issue... Not getting more than 40% EM no matter if I use the multiset or the nq-single models..", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> One peculiar finding: when we ran the rag-sequence-nq model with the provided wiki_dpr index (all models and index files used as-is) on the open-NQ test split (3610 questions, https://github.com/google-research-datasets/natural-questions/tree/master/nq_open), we observed EM=27.2, far below the paper's reported EM=44.5. We are baffled. Has anyone seen lower performance using the transformers RAG models? <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
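Following the maintainers' pointer about the index above, a hedged sketch of raising the HNSW search depth before retrieval; the path is hypothetical, and efSearch=256 matches the setting mentioned in the follow-up comment:

```python
import faiss

# Higher efSearch trades retrieval speed for recall, which directly moves EM.
index = faiss.read_index("wiki_dpr_hnsw_index.faiss")  # hypothetical local path
faiss.ParameterSpace().set_index_parameter(index, "efSearch", 256)
# (equivalently, for a plain HNSW index:
#  faiss.downcast_index(index).hnsw.efSearch = 256)
```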
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8285/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8284
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8284/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8284/comments
https://api.github.com/repos/huggingface/transformers/issues/8284/events
https://github.com/huggingface/transformers/issues/8284
735,841,973
MDU6SXNzdWU3MzU4NDE5NzM=
8,284
[rag] missing a working End-to-end evaluation example
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "@stas00 \r\n\r\nCan you please write a test code for **finetune.sh**. ", "As you can see I'm waiting for this ticket to be addressed before I'm able to write the tests. \r\n\r\nPerhaps you can address that, and then I will have all the info needed to write the tests.", "Until then please file a normal issue about it. I haven't done any rag work yet, so that's why I'm asking for support.", "@lhoestq is working on this at the moment :-) ", "Actually I'm working on the finetuning script example, not eval ;)\r\nBut maybe this can help with adding a test for the eval script example.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "stale" ]
1,604
1,616
1,616
CONTRIBUTOR
null
I'm going to try to write tests for `examples/rag` (https://github.com/huggingface/transformers/issues/7715), but first I'm trying to figure out how it works. Would it be possible to add a full `End-to-end evaluation` invocation example in https://github.com/huggingface/transformers/blob/master/examples/rag/README.md#end-to-end-evaluation? i.e., with the correct data. I tested https://github.com/huggingface/transformers/blob/master/examples/rag/README.md#retrieval-evaluation and it worked, but if I try to adapt the same params for e2e it crashes with: ``` $ python eval_rag.py --model_name_or_path facebook/rag-sequence-nq --model_type rag_sequence \ --evaluation_set output/biencoder-nq-dev.questions --gold_data_path output/biencoder-nq-dev.pages \ --predictions_path output/retrieval_preds.tsv --eval_mode e2e --gold_data_mode qa --n_docs 5 \ --print_predictions 2020-11-03 22:07:33.124277: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 INFO:__main__:Evaluate the following checkpoints: ['facebook/rag-sequence-nq'] INFO:__main__:Calculating metrics based on an existing predictions file: output/retrieval_preds.tsv Traceback (most recent call last): File "eval_rag.py", line 314, in <module> main(args) File "eval_rag.py", line 280, in main score_fn(args, args.predictions_path, args.gold_data_path) File "eval_rag.py", line 46, in get_scores data = pd.read_csv(gold_data_path, sep="\t", header=None) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pandas/io/parsers.py", line 686, in read_csv return _read(filepath_or_buffer, kwds) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pandas/io/parsers.py", line 458, in _read data = parser.read(nrows) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pandas/io/parsers.py", line 1196, in read ret = self._engine.read(nrows) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pandas/io/parsers.py", line 2155, in read data = self._reader.read(nrows) File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read File "pandas/_libs/parsers.pyx", line 862, in pandas._libs.parsers.TextReader._read_low_memory File "pandas/_libs/parsers.pyx", line 918, in pandas._libs.parsers.TextReader._read_rows File "pandas/_libs/parsers.pyx", line 905, in pandas._libs.parsers.TextReader._tokenize_rows File "pandas/_libs/parsers.pyx", line 2042, in pandas._libs.parsers.raise_parser_error pandas.errors.ParserError: Error tokenizing data. C error: Expected 5 fields in line 2, saw 6 ``` I think it needs different input data. And we need two working examples: one each for `qa` and `ans`. I can handle adding this to the doc if you tell me what to add. Thanks. @patrickvonplaten, @lhoestq
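The ParserError above just means some rows in the gold file have more tab-separated fields than the first row; a quick sanity check before regenerating the data (paths as in the command above):

```python
# eval_rag.py hands this file to pandas, which expects a constant column
# count per row; more than one value printed here explains the crash above.
with open("output/biencoder-nq-dev.pages") as f:
    widths = {len(line.rstrip("\n").split("\t")) for line in f}
print(widths)
```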
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8284/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8283
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8283/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8283/comments
https://api.github.com/repos/huggingface/transformers/issues/8283/events
https://github.com/huggingface/transformers/pull/8283
735,823,879
MDExOlB1bGxSZXF1ZXN0NTE1MTQzNzE1
8,283
[tokenizers] convert_to_tensors: don't reconvert when the type is already right
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "ping", "Looks good to me. Thanks for handling this one @stas00 and sorry for the delay." ]
1,604
1,605
1,605
CONTRIBUTOR
null
I was trying to fix this warning: ``` src/transformers/tokenization_utils_base.py:608: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). tensor = as_tensor(value) ``` which appeared when running: ``` python eval_rag.py --model_name_or_path facebook/rag-sequence-nq --model_type rag_sequence --evaluation_set output/biencoder-nq-dev.questions --gold_data_path output/biencoder-nq-dev.pages --predictions_path output/retrieval_preds.tsv --eval_mode retrieval --k 1 ``` This appears to have happened because `convert_to_tensors` was called with data that was already a tensor of the right type. * [x] Ended up fixing it for pt and adding the same fix for tf/jax/np: basically, skip the conversion if the value is already of the required type, which avoids the PyTorch warning. * [x] Added tests for converting the already converted. * [x] While at it, added a missing test for `test_batch_encoding_with_labels_jax`. I understand `lambda` isn't welcome, so I had to define a few helper functions for numpy/jax. `partial` would have done the trick, but `isinstance` doesn't accept keyword args. @LysandreJik, @mfuntowicz
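A simplified sketch of the guard described above, shown for the PyTorch branch only (not the actual patch):

```python
import torch

def as_tensor(value):
    # Returning tensors unchanged avoids torch.tensor(tensor), which is what
    # triggers the copy-construct UserWarning quoted above.
    if isinstance(value, torch.Tensor):
        return value
    return torch.tensor(value)
```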
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8283/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8283", "html_url": "https://github.com/huggingface/transformers/pull/8283", "diff_url": "https://github.com/huggingface/transformers/pull/8283.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8283.patch", "merged_at": 1605816362000 }
https://api.github.com/repos/huggingface/transformers/issues/8282
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8282/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8282/comments
https://api.github.com/repos/huggingface/transformers/issues/8282/events
https://github.com/huggingface/transformers/pull/8282
735,766,008
MDExOlB1bGxSZXF1ZXN0NTE1MDk2OTk3
8,282
[blenderbot] regex fix
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
This PR fixes: ``` src/transformers/tokenization_blenderbot.py:163: DeprecationWarning: invalid escape sequence \s token = re.sub("\s{2,}", " ", token) ``` @LysandreJik
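Presumably the one-character fix, since `\s` is an invalid escape sequence in a regular string literal and a raw string silences the warning:

```python
import re

token = "too   many   spaces"
token = re.sub(r"\s{2,}", " ", token)  # raw string: no DeprecationWarning
```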
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8282/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8282", "html_url": "https://github.com/huggingface/transformers/pull/8282", "diff_url": "https://github.com/huggingface/transformers/pull/8282.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8282.patch", "merged_at": 1604498548000 }
https://api.github.com/repos/huggingface/transformers/issues/8281
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8281/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8281/comments
https://api.github.com/repos/huggingface/transformers/issues/8281/events
https://github.com/huggingface/transformers/pull/8281
735,759,534
MDExOlB1bGxSZXF1ZXN0NTE1MDkxOTQ5
8,281
Create README.md
{ "login": "RamonMamon", "id": 35195972, "node_id": "MDQ6VXNlcjM1MTk1OTcy", "avatar_url": "https://avatars.githubusercontent.com/u/35195972?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RamonMamon", "html_url": "https://github.com/RamonMamon", "followers_url": "https://api.github.com/users/RamonMamon/followers", "following_url": "https://api.github.com/users/RamonMamon/following{/other_user}", "gists_url": "https://api.github.com/users/RamonMamon/gists{/gist_id}", "starred_url": "https://api.github.com/users/RamonMamon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RamonMamon/subscriptions", "organizations_url": "https://api.github.com/users/RamonMamon/orgs", "repos_url": "https://api.github.com/users/RamonMamon/repos", "events_url": "https://api.github.com/users/RamonMamon/events{/privacy}", "received_events_url": "https://api.github.com/users/RamonMamon/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[ { "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false } ]
[ "Very cool. Is possible, can you add metadata as described in https://huggingface.co/docs#what-metadata-can-i-add-to-my-model-card?" ]
1,604
1,607
1,607
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8281/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8281", "html_url": "https://github.com/huggingface/transformers/pull/8281", "diff_url": "https://github.com/huggingface/transformers/pull/8281.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8281.patch", "merged_at": 1607697690000 }
https://api.github.com/repos/huggingface/transformers/issues/8280
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8280/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8280/comments
https://api.github.com/repos/huggingface/transformers/issues/8280/events
https://github.com/huggingface/transformers/issues/8280
735,748,699
MDU6SXNzdWU3MzU3NDg2OTk=
8,280
Translation finetuning error: TypeError: '>' not supported between instances of 'function' and 'int'
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The issue solved with setting --gpus 1 explicitly. thanks. " ]
1,604
1,604
1,604
NONE
null
Dear huggingface team, I'd like to train T5 from scratch on wmt19 (de-en), and I see these instructions on your page: - you provided the script for finetuning mbart_cc25; could I just change the model path and have it work out of the box for training T5 on a translation task? Are any changes needed? - when you use the sortish sampler (line 256 of finetune.py) you check the number of GPUs; in the case of TPUs, shall I check the number of TPU cores for the distributed version of the dataloader in line 256? - does distributed TPU training work for seq2seq models? I wonder why the dataloader is not modified for TPU cores; is this on purpose, and does it work fine for TPUs too? - I also get these errors running the provided script; thank you for your help. Best Rabeeh ``` (test) rabeeh@brain1:~/ruse/hf/transformers/examples/seq2seq$ ./train_mbart_cc25_enro.sh --output_dir enro_finetune_baseline --label_smoothing 0.1 --fp16_opt_level=O1 --logger_name wandb --sortish_sampler 2020-11-04 01:41:53.720772: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/nccl2/lib:/usr/local/cuda/extras/CUPTI/lib64 2020-11-04 01:41:53.720823: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "finetune.py", line 442, in <module> main(args) File "finetune.py", line 383, in main model: SummarizationModule = TranslationModule(args) File "finetune.py", line 367, in __init__ super().__init__(hparams, **kwargs) File "finetune.py", line 57, in __init__ if hparams.sortish_sampler and hparams.gpus > 1: TypeError: '>' not supported between instances of 'function' and 'int' ```
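Per the comment above, passing `--gpus 1` explicitly resolves this; a hedged sketch of a defensive version of the check that crashed (the callable-default cause is an assumption read off the traceback):

```python
def wants_sortish_sampler(hparams) -> bool:
    # hparams.gpus appears to hold a callable default rather than an int when
    # --gpus is not given on the command line, so '>' blows up; coerce first.
    gpus = getattr(hparams, "gpus", 0)
    if not isinstance(gpus, int):
        gpus = 0  # or pass --gpus explicitly, as in the resolution above
    return bool(hparams.sortish_sampler) and gpus > 1
```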
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8280/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8279
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8279/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8279/comments
https://api.github.com/repos/huggingface/transformers/issues/8279/events
https://github.com/huggingface/transformers/issues/8279
735,743,498
MDU6SXNzdWU3MzU3NDM0OTg=
8,279
Finetuning T5 on translation wmt19(de-en)
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
NONE
null
Dear huggingface team, I'd like to train T5 from scratch on wmt19 (de-en), and I see these instructions on your page: you provided the script for finetuning mbart_cc25; could I just change the model path and have it work out of the box for training T5 on a translation task? I also get these errors running the provided script; thank you for your help. Best Rabeeh ``` (test) rabeeh@brain1:~/ruse/hf/transformers/examples/seq2seq$ ./train_mbart_cc25_enro.sh --output_dir enro_finetune_baseline --label_smoothing 0.1 --fp16_opt_level=O1 --logger_name wandb --sortish_sampler 2020-11-04 01:41:53.720772: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/nccl2/lib:/usr/local/cuda/extras/CUPTI/lib64 2020-11-04 01:41:53.720823: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "finetune.py", line 442, in <module> main(args) File "finetune.py", line 383, in main model: SummarizationModule = TranslationModule(args) File "finetune.py", line 367, in __init__ super().__init__(hparams, **kwargs) File "finetune.py", line 57, in __init__ if hparams.sortish_sampler and hparams.gpus > 1: TypeError: '>' not supported between instances of 'function' and 'int' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8279/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8278
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8278/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8278/comments
https://api.github.com/repos/huggingface/transformers/issues/8278/events
https://github.com/huggingface/transformers/issues/8278
735,735,585
MDU6SXNzdWU3MzU3MzU1ODU=
8,278
[commit #29b536a] AttributeError: module 'numpy.random' has no attribute 'Generator'
{ "login": "ksjae", "id": 17930170, "node_id": "MDQ6VXNlcjE3OTMwMTcw", "avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ksjae", "html_url": "https://github.com/ksjae", "followers_url": "https://api.github.com/users/ksjae/followers", "following_url": "https://api.github.com/users/ksjae/following{/other_user}", "gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}", "starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ksjae/subscriptions", "organizations_url": "https://api.github.com/users/ksjae/orgs", "repos_url": "https://api.github.com/users/ksjae/repos", "events_url": "https://api.github.com/users/ksjae/events{/privacy}", "received_events_url": "https://api.github.com/users/ksjae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Another error:\r\n```\r\n---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\n<ipython-input-7-279c49635b32> in <module>\r\n----> 1 import transformers\r\n\r\n/scratch/a1204a01/.conda/envs/notebook/lib/python3.7/site-packages/transformers/__init__.py in <module>\r\n 133 \r\n 134 # Pipelines\r\n--> 135 from .pipelines import (\r\n 136 Conversation,\r\n 137 ConversationalPipeline,\r\n\r\n/scratch/a1204a01/.conda/envs/notebook/lib/python3.7/site-packages/transformers/pipelines.py in <module>\r\n 35 from .file_utils import add_end_docstrings, is_tf_available, is_torch_available\r\n 36 from .modelcard import ModelCard\r\n---> 37 from .tokenization_auto import AutoTokenizer\r\n 38 from .tokenization_bert import BasicTokenizer\r\n 39 from .tokenization_utils import PreTrainedTokenizer\r\n\r\n/scratch/a1204a01/.conda/envs/notebook/lib/python3.7/site-packages/transformers/tokenization_auto.py in <module>\r\n 117 \r\n 118 if is_tokenizers_available():\r\n--> 119 from .tokenization_albert_fast import AlbertTokenizerFast\r\n 120 from .tokenization_bart_fast import BartTokenizerFast\r\n 121 from .tokenization_bert_fast import BertTokenizerFast\r\n\r\n/scratch/a1204a01/.conda/envs/notebook/lib/python3.7/site-packages/transformers/tokenization_albert_fast.py in <module>\r\n 21 \r\n 22 from .file_utils import is_sentencepiece_available\r\n---> 23 from .tokenization_utils_fast import PreTrainedTokenizerFast\r\n 24 from .utils import logging\r\n 25 \r\n\r\n/scratch/a1204a01/.conda/envs/notebook/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in <module>\r\n 28 from tokenizers.decoders import Decoder as DecoderFast\r\n 29 \r\n---> 30 from .convert_slow_tokenizer import convert_slow_tokenizer\r\n 31 from .file_utils import add_end_docstrings\r\n 32 from .tokenization_utils import PreTrainedTokenizer\r\n\r\n/scratch/a1204a01/.conda/envs/notebook/lib/python3.7/site-packages/transformers/convert_slow_tokenizer.py in <module>\r\n 26 \r\n 27 # from transformers.tokenization_openai import OpenAIGPTTokenizer\r\n---> 28 from transformers.utils import sentencepiece_model_pb2 as model\r\n 29 \r\n 30 from .file_utils import requires_sentencepiece\r\n\r\nImportError: cannot import name 'sentencepiece_model_pb2' from 'transformers.utils' (/home01/a1204a01/.local/lib/python3.7/site-packages/transformers/utils/__init__.py)\r\n```", "Hi! could you let us know how you installed `transformers`?", "I built from source, by ```git clone``` and ```pip install .```\r\n\r\nEDIT: Huh, it's now giving error ```ImportError: cannot import name 'is_main_process' from 'transformers.trainer_utils'```", "Fixed by reinstalling python3 and reinstalling transformer with latest commit" ]
1,604
1,604
1,604
NONE
null
## Environment info

- `transformers` version: commit #29b536a
- Platform: Linux
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): N/A
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes

### Who can help

I don't know, anyone?

## Information

The problem arises when using:

```
import transformers
```

The task I am working on is: (ANY)

## To reproduce

Steps to reproduce the behavior:

```
import transformers
```

## Error message

```
File "/home01/a1204a01/.local/bin/transformers-cli", line 6, in <module>
    from transformers.commands.transformers_cli import main
  File "/home01/a1204a01/.local/lib/python3.7/site-packages/transformers/__init__.py", line 22, in <module>
    from .integrations import (  # isort:skip
  File "/home01/a1204a01/.local/lib/python3.7/site-packages/transformers/integrations.py", line 81, in <module>
    from .file_utils import is_torch_tpu_available  # noqa: E402
  File "/home01/a1204a01/.local/lib/python3.7/site-packages/transformers/file_utils.py", line 87, in <module>
    import datasets  # noqa: F401
  File "/home01/a1204a01/.local/lib/python3.7/site-packages/datasets/__init__.py", line 27, in <module>
    from .arrow_dataset import Dataset, concatenate_datasets
  File "/home01/a1204a01/.local/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 175, in <module>
    class Dataset(DatasetInfoMixin, IndexableMixin):
  File "/home01/a1204a01/.local/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1889, in Dataset
    new_fingerprint: Optional[str] = None,
AttributeError: module 'numpy.random' has no attribute 'Generator'
```

## Expected behavior

`import transformers` succeeds without raising an error.
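`numpy.random.Generator` was added in NumPy 1.17, so the `AttributeError` above is consistent with an older NumPy being picked up when `datasets` is imported. A minimal sanity check (the version bound is the only assumption here):

```python
import numpy as np

print(np.__version__)  # numpy.random.Generator requires NumPy >= 1.17
assert hasattr(np.random, "Generator"), "upgrade with: pip install -U 'numpy>=1.17'"
```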
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8278/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8277
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8277/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8277/comments
https://api.github.com/repos/huggingface/transformers/issues/8277/events
https://github.com/huggingface/transformers/issues/8277
735,685,565
MDU6SXNzdWU3MzU2ODU1NjU=
8,277
SqueezeBert does not appear to properly generate text
{ "login": "huu4ontocord", "id": 8900094, "node_id": "MDQ6VXNlcjg5MDAwOTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4", "gravatar_id": "", "url": "https://api.github.com/users/huu4ontocord", "html_url": "https://github.com/huu4ontocord", "followers_url": "https://api.github.com/users/huu4ontocord/followers", "following_url": "https://api.github.com/users/huu4ontocord/following{/other_user}", "gists_url": "https://api.github.com/users/huu4ontocord/gists{/gist_id}", "starred_url": "https://api.github.com/users/huu4ontocord/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/huu4ontocord/subscriptions", "organizations_url": "https://api.github.com/users/huu4ontocord/orgs", "repos_url": "https://api.github.com/users/huu4ontocord/repos", "events_url": "https://api.github.com/users/huu4ontocord/events{/privacy}", "received_events_url": "https://api.github.com/users/huu4ontocord/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! First of all, you're using the `squeezebert-mnli` checkpoint, which is a checkpoint that was fine-tuned on the MNLI dataset. It cannot be used to do masked language modeling.\r\n\r\nI believe you should be using the `squeezebert-uncased` checkpoint instead.\r\n\r\nHowever, even when using that checkpoint with the MLM pipeline I cannot obtain sensible results. Maybe @forresti can chime in and let us know if something's up!\r\n\r\n", "Thanks @LysandreJik . I used both squeezebert-mnli and squeezebert-uncased (not shown). Same type of results. Thanks for checking. @forresti any thoughts? Is there something wrong with the squeezbert tokenizer? ", "@ontocord Sorry for the slow reply. I will dig into this on Thursday this week.", "@ontocord Thanks so much for bringing this to my attention! I was able to reproduce the issue. And, I think I was able to fix the issue in PR #8479.\r\n\r\nNow, let's try running your example code with...\r\n* PR #8479\r\n* the `squeezebert-uncased` checkpoint\r\n\r\n... this produces the following output:\r\n```\r\nSome weights of the model checkpoint at squeezebert/squeezebert-uncased were not used when initializing SqueezeBertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']\r\n- This IS expected if you are initializing SqueezeBertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing SqueezeBertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of SqueezeBertForMaskedLM were not initialized from the model checkpoint at squeezebert/squeezebert-uncased and are newly initialized: ['transformer.embeddings.position_ids']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n**\r\nhe was an american politician and lawyer who served as the 16th president of the united states from 1861 to 1865 . he led the nation through the american civil war , the country ' s greatest war , war , and economic crisis . , war , economic economic war\r\njohnson is a americans statesman & attorney and serve the interim 17th presidency in of confederate state in 1860 until 1866 \" white lead a country throughout a america black wars and a nation ’ largest largest economic and famine and , political crises \" and famine war and political crisis\r\n**\r\ngeorge washington , who served as the first president of the united states from 1796 to 1797 , was an american political leader , patriot patriot , statesman , and founding father . previously , he led patriot forces to victory in the nation ' s war for independence . ,\r\njames harrison s jr serve in inaugural inaugural presidency in s u united in 1789 until 1799 ) is a americans politician figure and military statesman and politician and , adoptive fathers \" historically was his lead revolutionary troops in fight during a country ’ the fight of freedom \" and\r\n**\r\njohnson , the first african - american president of the united states , is an american politician and attorney who served as the 44th president of the united states from 2016 to 2017 . he was a member of the republican party . , john the republican republican party . 
the\r\nwilliams is , second black – americans governor in this colored senate islander was a americans political , lawyer , serves the a 43rd governor for of union state in 2015 until 2016 , she is an part the house democratic assembly \" . james senate democratic democratic assembly party and\r\n```\r\n\r\nAlas, the model seems to think Obama's name is \"Johnson,\" but it does get George Washington correct.\r\n\r\nAnyway, does this output look a bit more like what you expected? :)", "Thsnks a lot @forresti! This works as well with the fill-mask pipeline:\r\n\r\n```py\r\n>>> from transformers import AutoModelForMaskedLM, AutoTokenizer\r\n\r\n>>> model = AutoModelForMaskedLM.from_pretrained('squeezebert/squeezebert-uncased')\r\n>>> tokenizer = AutoTokenizer.from_pretrained('squeezebert/squeezebert-uncased')\r\n>>> input_txt = [\r\n... \"George Washington, who served as the first [MASK] of the United States from 1789 to 1797, was an American political leader.\"\r\n... ]\r\n\r\n>>> from transformers import pipeline\r\n>>> nlp = pipeline(\"fill-mask\", model=model, tokenizer=tokenizer)\r\n>>> print(nlp(input_txt))\r\n[{'sequence': '[CLS] george washington, who served as the first president of the united states from 1789 to 1797, was an american political leader. [SEP]', 'score': 0.9644643664360046, 'token': 2343, 'token_str': 'president'}, {'sequence': '[CLS] george washington, who served as the first governor of the united states from 1789 to 1797, was an american political leader. [SEP]', 'score': 0.026940250769257545, 'token': 3099, 'token_str': 'governor'}, {'sequence': '[CLS] george washington, who served as the first king of the united states from 1789 to 1797, was an american political leader. [SEP]', 'score': 0.0013772461097687483, 'token': 2332, 'token_str': 'king'}, {'sequence': '[CLS] george washington, who served as the first lieutenant of the united states from 1789 to 1797, was an american political leader. [SEP]', 'score': 0.0012003666488453746, 'token': 3812, 'token_str': 'lieutenant'}, {'sequence': '[CLS] george washington, who served as the first secretary of the united states from 1789 to 1797, was an american political leader. [SEP]', 'score': 0.0008091009221971035, 'token': 3187, 'token_str': 'secretary'}]\r\n\r\n```", "Thank @forresti! Yes this fixes the problem! Thank you @LysandreJik as well! I noticed that different models have different capacities to store facts. Roughly based on the number of parameters, but not always. As a question, do you know of any models that are trained to identify a relationship and not a word in the mask:, leader($X, president,united_states,1789,1797) served as the first president of the united states from 1789 to 1797 ... in theory this should reduce the number of facts the model needs to learn as the relationships are already being learned by the attention mechanism, I belive. \r\n" ]
1,604
1,605
1,605
NONE
null
## Environment info

Google Colab, using CPU with high RAM.

### Who can help

@sgugger @forresti @LysandreJik

## Information

Model I am using: squeezebert-uncased, squeezebert-mnli, etc.

The problem arises when trying to generate the likely output of the input sequence and predicting masked tokens.

## To reproduce

```
from torch import nn
from transformers import AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained('squeezebert/squeezebert-mnli')
tokenizer = AutoTokenizer.from_pretrained('squeezebert/squeezebert-mnli')
#model.tie_weights()

input_txt = ["[MASK] was an American [MASK] and lawyer who served as the 16th president of the United States from 1861 to 1865. [MASK] led the nation through the American Civil War, the country's greatest [MASK], [MASK], and [MASK] crisis. ", \
    "George [MASK], who served as the first president of the United States from [MASK] to 1797, was an American political leader, [MASK] [MASK], statesman, and Founding Father. Previously, he led Patriot forces to [MASK] in the nation's War for Independence. ", \
    "[MASK], the first African-American [MASK] of the [MASK] [MASK], is an American politician and attorney who served as the 44th [MASK] of the United States from [MASK] to 2017. [MASK] was a member of the [MASK] [MASK]. "]
#input_txt = [i.replace("[MASK]", tokenizer.mask_token) for i in input_txt]

inputs = tokenizer(input_txt, return_tensors='pt', add_special_tokens=True, padding=True)
inputs['output_attentions'] = True
inputs['output_hidden_states'] = True
inputs['return_dict'] = True
outputs = model(**inputs)

predictions = outputs.logits
for pred in predictions:
    print("**")
    sorted_preds, sorted_idx = pred.sort(dim=-1, descending=True)
    for k in range(2):
        predicted_index = [sorted_idx[i, k].item() for i in range(0, len(predictions[0]))]
        predicted_token = ' '.join([tokenizer.convert_ids_to_tokens([predicted_index[x]])[0] for x in range(1, len(predictions[0]))]).replace('Ġ', ' ').replace('  ', ' ').replace('##', '')
        print(predicted_token)
```

## Expected behavior

I expected at least the input to be echoed out, with the slots filled in with Lincoln, Washington, and Obama. This works for bert, distilbert, roberta, etc.

## Actual output

Some weights of the model checkpoint at squeezebert/squeezebert-mnli were not used when initializing SqueezeBertForMaskedLM: ['classifier.weight', 'classifier.bias']
- This IS expected if you are initializing SqueezeBertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing SqueezeBertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of SqueezeBertForMaskedLM were not initialized from the model checkpoint at squeezebert/squeezebert-mnli and are newly initialized: ['lm_head.weight', 'lm_head.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
odict_keys(['logits', 'hidden_states', 'attentions']) ** tone lani rce soto olar rce ux wer lani bal bal vus novice rce rce rce lani owe frey owe gent naire tres che lani lani nae ui territories accusing oaks accusing ois lor resulting resulting rce lor rce rendering rce rce tres assist ois accusing rendering warns accusing gent culture bowls hectares awan rce bal ade wd an rce mole hoe yde lani lani lani rce tres resulted bal resulted resulting tone consequently bowls fellow wo ois crafts oaks withdrew nations wu resulting fellow rce resulting verses motivated lori motivated motivated gent vus naire dealt warns gent warns tres ** culture sas hari lani rce gaa lani novice rce rce rce rce tres nae jan thal rce rce rce awan olar v8 rce olar example rce select rce rce hore rden resulting lori resulting drive led bon peoples jal gau nae hoe lies lies lies lies lins lies resulting tone continuum tone repeat gaa lani wo rce coven lani lani lani lani gle aw aw awan sco lani yde rce yde olar ux rce rce trait xie xie cao particular elder lani lani naturally blend lie aman commando folding rendering helps ois lete wi lins lins hoe independence sons tones ** tone acts attribute trait pour pour trait % sities ub azi % acts lani rce awan act cao yde wd hoe hoe hoe hoe % vos vos rce hort hoe sept jan vers naire hum candle therefore lists chen hoe lie side mut hen mor lungs zoo lie side side hum fever acts pour shropshire cz % sities isson penalties lie sities act acts bble pour yde ave shropshire yde lto ango ango pour lden rce hoe gil hoe tres aw nae dha therefore bisexual therefore lb mates rden too zoo forum naire dealt lag mole mess pore forum ior
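Independent of the checkpoint question discussed in the comments, the literal `[MASK]` string in the reproduction only tokenizes to the mask id when it matches `tokenizer.mask_token` exactly; the commented-out replacement line in the script makes that explicit. A small self-contained check, assuming the `squeezebert/squeezebert-uncased` tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-uncased")
text = "George [MASK] was the first president of the United States."
text = text.replace("[MASK]", tokenizer.mask_token)  # no-op for BERT-style vocabs, explicit for others
ids = tokenizer(text)["input_ids"]
assert tokenizer.mask_token_id in ids
```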
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8277/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8276
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8276/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8276/comments
https://api.github.com/repos/huggingface/transformers/issues/8276/events
https://github.com/huggingface/transformers/pull/8276
735,676,331
MDExOlB1bGxSZXF1ZXN0NTE1MDI1MTcw
8,276
Support various BERT relative position embeddings (2nd)
{ "login": "zhiheng-huang", "id": 9144018, "node_id": "MDQ6VXNlcjkxNDQwMTg=", "avatar_url": "https://avatars.githubusercontent.com/u/9144018?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhiheng-huang", "html_url": "https://github.com/zhiheng-huang", "followers_url": "https://api.github.com/users/zhiheng-huang/followers", "following_url": "https://api.github.com/users/zhiheng-huang/following{/other_user}", "gists_url": "https://api.github.com/users/zhiheng-huang/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhiheng-huang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhiheng-huang/subscriptions", "organizations_url": "https://api.github.com/users/zhiheng-huang/orgs", "repos_url": "https://api.github.com/users/zhiheng-huang/repos", "events_url": "https://api.github.com/users/zhiheng-huang/events{/privacy}", "received_events_url": "https://api.github.com/users/zhiheng-huang/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @zhiheng-huang,\r\n\r\nit would be great if you could take a look at the failing tests :-) ", "Hey @patrickvonplaten, I fixed all failed tests except check_code_quality. Currently the relative embedding is implemented for BERT only. In check_code_quality, `utils/check_copies.py` tries to copy the relative embedding implementation from BERT model to other models including `albert`, `electra`, `roberta` etc. I understand this may make the relative embedding methods ready to be used in those models. However, we haven't pre-trained those type of models with relative embedding and thus cannot assess their effectiveness. Please advise if I should fix this failing test (by ensuring relative embedding implementation copied to those BERT variants) or leave it as is. ", "Hey @zhiheng-huang, \r\n\r\nSadly there is still a problem with the git commit history. As you can see 54 files are changed in this PR. Could you make sure to keep the commit tree clean. It is not really possible to review the PR otherwise :-/ \r\n\r\nTry to make use of `git rebase` to avoid appending the master's commit history to your branch maybe", "In the worst case, you can just make the changes to the files you intend to change without `rebasing` or `merging` and then I can review and merge/rebase for you. ", "Rebased and removed the unintended merge commit. @patrickvonplaten, can you comment on the `utils/check_copies.py` question so we can move forward?", "Hi @patrickvonplaten @LysandreJik, I see one approval already, is it ready to merge? If not, can you point to the embedding (for example absolute position embedding) unit tests so I can try to come up with similar tests?", "Regarding tests, I think adding integration tests in the `test_modeling_bert.py` would be nice. What do you think @patrickvonplaten?\r\n\r\nThe BERT model doesn't have any such tests right now, but you can take inspiration from the `RobertaModelIntegrationTest` class in `test_modeling_roberta.py`, which you can find [here](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_roberta.py#L402).\r\n\r\nYou could add a couple of tests, each testing that you get the expected results: this will ensure the implementation will not diverge in the future. If you need a checkpoint, you can use `lysandre/tiny-bert-random`, which is a very small model (with random values), so it will be very light on the CI.\r\n\r\nLet me know if you need anything.", "@patrickvonplaten @LysandreJik \r\n1. Added forward test to ensure forward runs okay for `LayoutLM`, `Roberta`, `ELECTRA`, and `BERT` for three position embeddings: \"absolute\", \"relative_key\", \"relative_key_query\".\r\n2. Added integration test for `BERT` check points `bert-base-uncased`, `zhiheng-huang/bert-base-uncased-embedding-relative-key`, and `zhiheng-huang/bert-base-uncased-embedding-relative-key-query` to ensure that models predictions match expected outputs.", "@zhiheng-huang - Let me fix the CI later, don't worry about it :-) ", "> This looks good to me. Thanks a lot for your PR!\r\n> Any reason ALBERT and Longformer don't get this new functionality? (But RoBERTa and ELECTRA do?)\r\n\r\nGreat question! I ALBERT should get this functionality (I just added it - great catch!). Longformer has weird attention_scores which does not work with those embeddings.", "Good to merge! Thanks a mille @zhiheng-huang! ", "> Good to merge! Thanks a mille @zhiheng-huang!\r\n\r\nThanks! @patrickvonplaten @sgugger @LysandreJik " ]
1,604
1,606
1,606
CONTRIBUTOR
null
# What does this PR do?

Creating a new PR for https://github.com/huggingface/transformers/pull/8108 to keep a cleaner git history/commits.

The default BERT model `bert-base-uncased` was pre-trained with absolute position embeddings. We provide three pre-trained models which were pre-trained on the same training data (BooksCorpus and English Wikipedia) as the BERT model, but with different relative position embeddings (Shaw et al., Self-Attention with Relative Position Representations, https://arxiv.org/abs/1803.02155 and Huang et al., Improve Transformer Models with Better Relative Position Embeddings, https://arxiv.org/abs/2009.13658, accepted in Findings of EMNLP 2020). We show how to fine-tune these pre-trained models on the SQuAD1.1 dataset. Our proposed relative position embedding method can boost the BERT base model (with default absolute position embeddings) from an f1 score of 88.52 to 90.54 with similar training/inference speed. It also boosts the `bert-large-uncased-whole-word-masking` model from 93.15 to 93.52 with 3 additional fine-tuning epochs. See examples/question-answering/README.md for more details.

Fixes #8108

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?

@patrickvonplaten @LysandreJik @julien-c
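For orientation, a rough sketch of the "relative_key" scoring idea from Shaw et al. that this PR wires into BERT-style attention — this is not the PR's code; the sizes and the clipping distance are illustrative:

```python
import torch

seq_len, head_dim, max_rel = 6, 8, 4  # illustrative sizes
q = torch.randn(seq_len, head_dim)
k = torch.randn(seq_len, head_dim)
rel_key = torch.nn.Embedding(2 * max_rel + 1, head_dim)  # one vector per clipped signed distance

pos = torch.arange(seq_len)
distance = (pos[None, :] - pos[:, None]).clamp(-max_rel, max_rel) + max_rel  # (L, L) indices
# content-content term plus a query-to-relative-key term
scores = q @ k.t() + torch.einsum("qh,qkh->qk", q, rel_key(distance))
```

The "relative_key_query" variant adds a symmetric key-to-relative-position term on top of this.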
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8276/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8276", "html_url": "https://github.com/huggingface/transformers/pull/8276", "diff_url": "https://github.com/huggingface/transformers/pull/8276.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8276.patch", "merged_at": 1606225254000 }
https://api.github.com/repos/huggingface/transformers/issues/8275
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8275/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8275/comments
https://api.github.com/repos/huggingface/transformers/issues/8275/events
https://github.com/huggingface/transformers/pull/8275
735,640,571
MDExOlB1bGxSZXF1ZXN0NTE0OTk1ODY1
8,275
[CIs] Better reports everywhere
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "And and update on proposing this multiple report files feature to be a core feature in pytest https://github.com/pytest-dev/pytest/issues/7972 - one of the developers vetoed it, so I guess it will remain just here for now. It makes no sense for this not to be a core feature of pytest, as we are just splitting the huge mess of everything being dumped to the terminal to just one file per report, but there was no invitation to discuss that - just NO. If someone wants to make it into a pytest plugin it'd surely be useful to others.", "> I saw the discussion (or lack thereof) on pytest. Their loss! We don't mind having the post-processing in transformers.\r\n\r\nEventually we will either have to port it to pytest hooks or keep up with the pytest API changes, since currently the code uses `pytest` internals and could break should they change those. It's just so much simpler doing that than reinventing the wheel." ]
1,604
1,604
1,604
CONTRIBUTOR
null
Continuing the work in https://github.com/huggingface/transformers/pull/8110 and https://github.com/huggingface/transformers/pull/8163, this PR does the following:

* [x] rename `pytest --make_reports` to `pytest --make-reports` for consistency with the rest of the `pytest` opts that don't use `_`
* [x] move the `--make_reports` opt adding to a shared location and load it only once to avoid a `pytest` failure - some pytest plugins like `pytest-instafail` load `tests/conftest.py` even when running `examples` - now we can run tests from both test suites at once
* [x] rename `reports/report_foo` to `reports/foo` - avoid repetition
* [x] install `--make_reports` in all CIs: circleci and github actions
* [x] make the reports available via artifacts
* [x] always cat short failure reports for github actions in their own "tab", since getting to artifacts there is a cumbersome process. I'm not sure this is needed in circleci jobs since each report in artifacts is available in 2 clicks, so I left the `cat *failures_short.txt` out on CircleCI jobs.
* [x] fixed a few issues in the github actions job configuration

@sgugger, @LysandreJik, @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8275/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8275", "html_url": "https://github.com/huggingface/transformers/pull/8275", "diff_url": "https://github.com/huggingface/transformers/pull/8275.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8275.patch", "merged_at": 1604440633000 }
https://api.github.com/repos/huggingface/transformers/issues/8274
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8274/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8274/comments
https://api.github.com/repos/huggingface/transformers/issues/8274/events
https://github.com/huggingface/transformers/pull/8274
735,636,813
MDExOlB1bGxSZXF1ZXN0NTE0OTkyNzI0
8,274
Data collator for token classification
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I just tried it, and noticed that it doesn't work if `features` is `List[Dict[str, torch.Tensor]]`,\r\nbecause `tokenizer.pad()` will set `return_tensors` to `pt` if `input_ids` is `torch.Tensor` and `return_tensors` is `None`.\r\n\r\nFor example my dataset looked like this.\r\n```python\r\ndef __getitem__(self, i): \r\n return {k: torch.tensor(v, dtype=torch.long) for k,v in self.examples[i].items()} \r\n```\r\nChanging to this solves the problem.\r\n```python\r\ndef __getitem__(self, i): \r\n return self.examples[i] \r\n```\r\n\r\nMaybe I should have always used this and leave it to collator to tensorize the features.", "Will have look. In general, yes it's better to have your examples be the results of the tokenization (so `Dict[str, List[int]]`) and let the data collator handles the conversion to tensors." ]
1,604
1,604
1,604
COLLABORATOR
null
# What does this PR do?

This PR adds a `DataCollatorForTokenClassification`, very similar to `DataCollatorWithPadding` but whose job is to pad the labels to the same size as the inputs.

In passing, it adds tests of `DataCollatorWithPadding` and cleans up all the tests of the various data collators that were marked as slow because they required a pretrained tokenizer. For the unit testing, no real tokenizer is needed since we just need the pad/mask token.
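A minimal sketch of what such a collator has to do — pad `input_ids` with the pad token and `labels` with -100 (PyTorch's default `CrossEntropyLoss` ignore index) so padding positions do not contribute to the loss. This shows the idea only; the actual class delegates input padding to `tokenizer.pad()`:

```python
import torch

def collate(features, pad_token_id=0, label_pad_token_id=-100):
    # features: list of {"input_ids": List[int], "labels": List[int]}
    max_len = max(len(f["input_ids"]) for f in features)
    batch = {"input_ids": [], "labels": []}
    for f in features:
        pad = max_len - len(f["input_ids"])
        batch["input_ids"].append(f["input_ids"] + [pad_token_id] * pad)
        batch["labels"].append(f["labels"] + [label_pad_token_id] * pad)  # -100 is ignored by the loss
    return {k: torch.tensor(v) for k, v in batch.items()}
```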
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8274/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8274", "html_url": "https://github.com/huggingface/transformers/pull/8274", "diff_url": "https://github.com/huggingface/transformers/pull/8274.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8274.patch", "merged_at": 1604439207000 }
https://api.github.com/repos/huggingface/transformers/issues/8273
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8273/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8273/comments
https://api.github.com/repos/huggingface/transformers/issues/8273/events
https://github.com/huggingface/transformers/pull/8273
735,598,698
MDExOlB1bGxSZXF1ZXN0NTE0OTYyMDE4
8,273
add evaluate doc - trainer.evaluate returns 'epoch' from training
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger changes are made...", "Thanks!" ]
1,604
1,604
1,604
CONTRIBUTOR
null
Improved documentation - see #8184
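An illustration of the documented behavior, assuming `trainer` is a `Trainer` on which `train()` has already been run (hypothetical instance, shown for the shape of the returned dict only):

```python
metrics = trainer.evaluate()
print(metrics["eval_loss"], metrics.get("epoch"))  # "epoch" carries over from training
```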
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8273/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8273", "html_url": "https://github.com/huggingface/transformers/pull/8273", "diff_url": "https://github.com/huggingface/transformers/pull/8273.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8273.patch", "merged_at": 1604930460000 }
https://api.github.com/repos/huggingface/transformers/issues/8272
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8272/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8272/comments
https://api.github.com/repos/huggingface/transformers/issues/8272/events
https://github.com/huggingface/transformers/issues/8272
735,587,857
MDU6SXNzdWU3MzU1ODc4NTc=
8,272
Saving and reloading DistilBertForTokenClassification fine-tuned model
{ "login": "smith-nathanh", "id": 9698634, "node_id": "MDQ6VXNlcjk2OTg2MzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9698634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/smith-nathanh", "html_url": "https://github.com/smith-nathanh", "followers_url": "https://api.github.com/users/smith-nathanh/followers", "following_url": "https://api.github.com/users/smith-nathanh/following{/other_user}", "gists_url": "https://api.github.com/users/smith-nathanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/smith-nathanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/smith-nathanh/subscriptions", "organizations_url": "https://api.github.com/users/smith-nathanh/orgs", "repos_url": "https://api.github.com/users/smith-nathanh/repos", "events_url": "https://api.github.com/users/smith-nathanh/events{/privacy}", "received_events_url": "https://api.github.com/users/smith-nathanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm encountering the same problem. Were you able to solve it?", "Could you add the following lines:\r\n\r\n```py\r\nfrom transformers import logging as hf_logging\r\n\r\nhf_logging.set_verbosity_info()\r\n```\r\nbefore reloading the model, and paste the results here?\r\n\r\ncc @sgugger ", "loading configuration file trained_models/checkpoint-8000/config.json\r\nModel config DistilBertConfig {\r\n \"_name_or_path\": \"distilbert-base-uncased\",\r\n \"activation\": \"gelu\",\r\n \"architectures\": [\r\n \"DistilBertForSequenceClassification\"\r\n ],\r\n \"attention_dropout\": 0.1,\r\n \"dim\": 768,\r\n \"dropout\": 0.1,\r\n \"hidden_dim\": 3072,\r\n \"id2label\": { ... },\r\n \"initializer_range\": 0.02,\r\n \"label2id\": { ... },\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"distilbert\",\r\n \"n_heads\": 12,\r\n \"n_layers\": 6,\r\n \"pad_token_id\": 0,\r\n \"qa_dropout\": 0.1,\r\n \"seq_classif_dropout\": 0.2,\r\n \"sinusoidal_pos_embds\": false,\r\n \"tie_weights_\": true,\r\n \"vocab_size\": 30522\r\n}\r\n\r\nloading weights file trained_models/checkpoint-8000/pytorch_model.bin\r\nAll model checkpoint weights were used when initializing DistilBertForSequenceClassification.\r\n\r\nAll the weights of DistilBertForSequenceClassification were initialized from the model checkpoint at trained_models/checkpoint-8000.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use DistilBertForSequenceClassification for predictions without further training.", "When I go and evaluate the model from this point (either manually or by making a Trainer and using trainer.evaluate()) I get terrible scores. \r\n\r\nIf I make a Trainer and try to continue training, I get terrible loss scores _except_ if I provide the checkpoint directory as part of the input to trainer.train(). If I supply the checkpoint directory there, the training appears to continue from the checkpoint, and if I train for ~300 more iterations, trainer.evaluate() gives decent performance but still not what I was seeing during the initial run. ", "Okay, that's interesting. Do you mind sharing with us your environment? You can run `!transformers-cli env` and put the result here, we'll look into it.", "Thanks @LysandreJik.\r\n\r\n- `transformers` version: 3.5.0\r\n- Platform: Linux-5.4.0-1030-aws-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.6.9\r\n- PyTorch version (GPU?): 1.6.0 (True)\r\n- Tensorflow version (GPU?): 2.3.1 (False)\r\n- Using GPU in script?: Yes, single K80 on AWS\r\n- Using distributed or parallel set-up in script?: No", "There is little we can do to debug without a reproducer, which we don't have as the initial code contains a `train_dataset` and an `eval_dataset` we don't have access to. I just tried the notebook on {GLUE](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) and ran until the end of training (before hyperparameter-search), saved the model with `trainer.save_model(some_path)`, the restarted the notebook, ran all the cells up until the training then a new one with\r\n```\r\nmodel = AutoModelForSequenceClassification.from_pretrained(some_path, local_files_only=True)\r\ntrainer.model = model.cuda()\r\ntrainer.evaluate()\r\n```\r\nand it gave the exact same results as the end of training, so the `from_pretrained` method works well with the distilbert models.", "As an update, I find that it's not just Distilbert models which will not save/reload for me, but also an Albert model gives the same behavior. 
Evaluation at the end of training gives 68% accuracy on my problem, whereas save/reload/reevaluate gives <1% accuracy. Currently trying transformers 4.0 rather than 3.5.", "Thanks for the update. As mentioned above, it does not help us fix this problem. We need a reliable reproducer for that.", "@sgugger Turns out the problem was my fault. I was not keeping a consistent mapping of label names to integers across my runs. Once I corrected this my models performed identically after reload. Perhaps the OPs problem was similar. In any case, thanks for the help (the benchmark you linked helped me to debug) and sorry for the wild goose chase.", "Glad you found the reason to your issue!", "@mkreisel How are you \"keeping a consistent mapping of label names to integers\" across your runs now? Do you use a huggingface dataset ClassLabel ? I noticed in this original problem that it might be that the label mapping is somehow different after reloading the model when using num_labels in from_pretrained().", "> There is little we can do to debug without a reproducer, which we don't have as the initial code contains a `train_dataset` and an `eval_dataset` we don't have access to. I just tried the notebook on {GLUE](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) and ran until the end of training (before hyperparameter-search), saved the model with `trainer.save_model(some_path)`, the restarted the notebook, ran all the cells up until the training then a new one with\r\n> \r\n> ```\r\n> model = AutoModelForSequenceClassification.from_pretrained(some_path, local_files_only=True)\r\n> trainer.model = model.cuda()\r\n> trainer.evaluate()\r\n> ```\r\n> \r\n> and it gave the exact same results as the end of training, so the `from_pretrained` method works well with the distilbert models.\r\n\r\n@sgugger The issue is strictly with tokenclassification class. The index of the labels gets misaligned somehow when reloading a tokenclassification model. The problem happens across many model types: bert, distilbert, roberta, etc. If just giving num_labels = x when loading the model. I believe the issue has to do with the tokenizers and the fact that setting subwords equal to -100 creates another class when training the model, but that class is no longer available when you reload a pretrained tokenclassification model using from_pretrained(local_path). ", "@nhsmith85 I was doing index -> class mapping using my own dictionary, not using anything internal to HuggingFace. I created a dataset class as an extension of torch.utils.data.Dataset:\r\n\r\n```\r\nclass RecordsDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings, labels):\r\n self.encodings = encodings\r\n self.labels = labels\r\n\r\n def __getitem__(self, idx):\r\n item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}\r\n item['labels'] = torch.tensor(self.labels[idx])\r\n return item\r\n\r\n def __len__(self):\r\n return len(self.labels)\r\n```\r\n\r\nAt this point the text labels had already been mapped to integers. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hello, Did anyone manage to get a solution for this? 
I am facing a very similar issue on `ViTForImageClassification` on using \"google/vit-base-patch16-224\". Upon training, I am getting an accuracy of 0.75 and a very low loss. However, once I save and reload it, say after a day, the loss is back to ~10 and accuracy is 0.\r\n\r\nPlease find the necessary parts of my code here: https://gist.github.com/thevishnupradeep/d5efc0b0510d8a30d997cadd836d2c61", "Also encountering the exact same problem with DistilBERT for QA.", "As of July 2023 facing same issue with Bert model. Some one suggest a fix", "Saved model performance is very bad compared to online model. Why???? " ]
1,604
1,689
1,619
NONE
null
I am trying to reload a fine-tuned DistilBertForTokenClassification model. I am using transformers 3.4.0 and pytorch version 1.6.0+cu101. After using the Trainer to train the downloaded model, I save the model with trainer.save_model() and, during my troubleshooting, I saved the model in a **different** directory via model.save_pretrained(). I am using Google Colab and saving the model to my Google Drive. Before closing out my session, I evaluated the model and got good test results; however, when I return to the notebook (or factory restart the Colab notebook) and try to reload the model, the predictions are terrible. Upon checking both directories, the config.json file is there, as is the pytorch_model.bin. It seems the trained model is not getting saved in these directories, but rather just the original model is? The model will work just fine if I don't close out my notebook session, but upon returning (or factory resetting) the reloading of the model yields a model that does not give good predictions. Is the trained model getting saved in a cache temporarily? But the save_model() function saves the original model?

```
from transformers import DistilBertForTokenClassification

# load the pretrained model from huggingface
model = DistilBertForTokenClassification.from_pretrained('distilbert-base-uncased', num_labels=len(uniq_labels))
model.to('cuda');

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir = model_dir + 'mitmovie_pt_distilbert_uncased/results',  # output directory
    #overwrite_output_dir = True,
    evaluation_strategy='epoch',
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=64,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir = model_dir + 'mitmovie_pt_distilbert_uncased/logs',  # directory for storing logs
    logging_steps=10,
    load_best_model_at_end = True
)

trainer = Trainer(
    model = model,                  # the instantiated 🤗 Transformers model to be trained
    args = training_args,           # training arguments, defined above
    train_dataset = train_dataset,  # training dataset
    eval_dataset = test_dataset     # evaluation dataset
)

trainer.train()
trainer.evaluate()

model_dir = '/content/drive/My Drive/Colab Notebooks/models/'
trainer.save_model(model_dir + 'mitmovie_pt_distilbert_uncased/model')

# alternative saving method and folder
model.save_pretrained(model_dir + 'distilbert_testing')
```

Coming back to the notebook after restarting...

```
from transformers import DistilBertForTokenClassification, DistilBertConfig, AutoModelForTokenClassification

# retrieve the saved model
model = DistilBertForTokenClassification.from_pretrained(model_dir + 'mitmovie_pt_distilbert_uncased/model', local_files_only=True)
model.to('cuda')
```

Model predictions are now terrible when loading the model from either of the directories.
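The comment thread on this issue traced similar symptoms to an inconsistent label-to-id mapping between runs. One hedged way to rule that out, reusing `uniq_labels` and `model_dir` from the snippet above, is to pin the mapping in the config before saving, so it travels with config.json:

```python
# fix the label order once, then persist it in the model config
id2label = {i: label for i, label in enumerate(sorted(uniq_labels))}
model.config.id2label = id2label
model.config.label2id = {label: i for i, label in id2label.items()}
model.save_pretrained(model_dir + 'mitmovie_pt_distilbert_uncased/model')
```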
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8272/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8272/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8271
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8271/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8271/comments
https://api.github.com/repos/huggingface/transformers/issues/8271/events
https://github.com/huggingface/transformers/issues/8271
735,583,869
MDU6SXNzdWU3MzU1ODM4Njk=
8,271
Low accuracy after loading a custom pretrained model in a text binary classification problem
{ "login": "Smolky", "id": 1757190, "node_id": "MDQ6VXNlcjE3NTcxOTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1757190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Smolky", "html_url": "https://github.com/Smolky", "followers_url": "https://api.github.com/users/Smolky/followers", "following_url": "https://api.github.com/users/Smolky/following{/other_user}", "gists_url": "https://api.github.com/users/Smolky/gists{/gist_id}", "starred_url": "https://api.github.com/users/Smolky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Smolky/subscriptions", "organizations_url": "https://api.github.com/users/Smolky/orgs", "repos_url": "https://api.github.com/users/Smolky/repos", "events_url": "https://api.github.com/users/Smolky/events{/privacy}", "received_events_url": "https://api.github.com/users/Smolky/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The problem is mine. As other user suggest me in Stackoverflow, I have to save the model this way\r\n```\r\nmodel.save_pretrained (\"my-model\")`\r\n```" ]
1,604
1,604
1,604
NONE
null
## Environment info

- `transformers` version: 3.4.0
- Platform: Linux-4.15.0-122-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cpu (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Distributed (not really sure)

### Who can help

@LysandreJik

## Information

Posted on Stack Overflow. Received a comment pointing to two similar issues regarding saving and loading custom models. The original question can be found at: https://stackoverflow.com/questions/64666510/huggingface-transformers-low-accuracy-after-load-custom-pretrained-model-in-a-t?noredirect=1#comment114344159_64666510

In a nutshell, I am using BertForSequenceClassification (PyTorch) with `dccuchile/bert-base-spanish-wwm-cased` for solving a binary classification problem. I have trained the network and evaluated the model with a testing dataset (different from the training dataset). I have achieved an `acc` and `val_acc` between 0.85 and 0.9. However, after I save the model and retrieve it again in another script, the accuracy is similar to a random classifier (0.41).

The problem arises when using:

* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)

The task I am working on is:

* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)

## To reproduce

This is the code I am using for training and evaluating (during training):

```
criterion = torch.nn.CrossEntropyLoss ()
criterion = criterion.to (device)
optimizer = AdamW (model.parameters(), lr=5e-5)

for epoch in range (4):
    i = 0

    # Train this epoch
    model.train ()
    for batch in train_loader:
        optimizer.zero_grad ()
        input_ids = batch['input_ids'].to (device)
        attention_mask = batch['attention_mask'].to (device)
        labels = batch['label'].to (device)
        loss, _ = model (input_ids, attention_mask=attention_mask, labels=labels)
        _, preds = torch.max (_, dim=1)
        correct_predictions += torch.sum (preds == labels)
        i += 1
        acc = correct_predictions.item () / (batch_size * i)
        loss.backward ()
        optimizer.step ()

    # Eval this epoch with the testing dataset
    model = model.eval ()
    correct_predictions = 0
    with torch.no_grad ():
        for batch in test_loader:
            input_ids = batch['input_ids'].to (device)
            attention_mask = batch['attention_mask'].to (device)
            labels = batch['label'].to (device)
            loss, _ = model (input_ids, attention_mask=attention_mask, labels=labels)
            _, preds = torch.max (_, dim=1)
            correct_predictions += torch.sum (preds == labels)

model.bert.save_pretrained ("my-model")
tokenizer.save_pretrained ("my-model")
```

After this step, I got good accuracy after the first epoch.

Then, I load the model again in another script:

```
model = BertForSequenceClassification.from_pretrained ("my-model")

# Eval this epoch with the testing dataset
model = model.eval ()
correct_predictions = 0
with torch.no_grad ():
    for batch in test_loader:
        input_ids = batch['input_ids'].to (device)
        attention_mask = batch['attention_mask'].to (device)
        labels = batch['label'].to (device)
        loss, _ = model (input_ids, attention_mask=attention_mask, labels=labels)
        _, preds = torch.max (_, dim=1)
        correct_predictions += torch.sum (preds == labels)

print (correct_predictions.item () / len (test_df))
```

but the accuracy is similar to what I would get from a non-trained model.

## Expected behavior

After loading a model saved with `save_pretrained`, the model should provide similar accuracy and loss for the same data.
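The self-reported fix in the comment above comes down to which object gets saved: `model.bert.save_pretrained(...)` writes only the base encoder, so the classification head in the reloaded `BertForSequenceClassification` is freshly initialized at random. A two-line contrast, reusing `model` from the snippet above:

```python
model.save_pretrained("my-model")         # encoder + classification head are both saved
# model.bert.save_pretrained("my-model")  # encoder only -- the head is lost on reload
```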
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8271/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8271/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8270
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8270/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8270/comments
https://api.github.com/repos/huggingface/transformers/issues/8270/events
https://github.com/huggingface/transformers/pull/8270
735,561,538
MDExOlB1bGxSZXF1ZXN0NTE0OTMxOTAy
8,270
improve documentation of training_args.py
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot!" ]
1,604
1,604
1,604
CONTRIBUTOR
null
Documentation for the following fields has been improved:

- do_train
- do_eval
- do_predict

Also see #8179
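For context, a small usage sketch; per the clarified docstrings, these flags are meant to be read by training/evaluation scripts rather than consumed by `Trainer` itself:

```python
from transformers import TrainingArguments

args = TrainingArguments(output_dir="out", do_train=True, do_eval=True, do_predict=False)
if args.do_train:
    ...  # the script, not Trainer, decides to call trainer.train()
```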
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8270/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8270", "html_url": "https://github.com/huggingface/transformers/pull/8270", "diff_url": "https://github.com/huggingface/transformers/pull/8270.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8270.patch", "merged_at": 1604437038000 }
https://api.github.com/repos/huggingface/transformers/issues/8269
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8269/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8269/comments
https://api.github.com/repos/huggingface/transformers/issues/8269/events
https://github.com/huggingface/transformers/pull/8269
735,551,996
MDExOlB1bGxSZXF1ZXN0NTE0OTI0MTAy
8,269
[wip/s2s/pl] attempt to sync metrics in DDP
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
CONTRIBUTOR
null
This is broken. Attempted to add `AverageMetric`, where you just dump python floats and they get averaged at the end, but it is not working in DDP.

### Failing command (fails quickly at val sanity check)

```bash
cd examples/seq2seq
wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
tar -xzvf wmt_en_ro.tar.gz
export WANDB_PROJECT=dmar
export BS=64
export m=sshleifer/mar_enro_6_3_student
export MAX_LEN=128
python finetune.py \
  --learning_rate=3e-4 \
  --do_train \
  --do_predict \
  --fp16 \
  --val_check_interval 0.25 \
  --data_dir wmt_en_ro \
  --max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \
  --freeze_encoder --freeze_embeds \
  --train_batch_size=$BS --eval_batch_size=$BS \
  --tokenizer_name Helsinki-NLP/opus-mt-en-ro --model_name_or_path $m \
  --warmup_steps 500 --sortish_sampler --logger_name wandb \
  --gpus 2 --fp16_opt_level=O1 --task translation --num_sanity_val_steps=1 --output_dir dmar_met_test_2gpu \
  --num_train_epochs=2 --overwrite_output_dir
```

### Traceback

```bash
File "/home/shleifer/transformers_fork/examples/seq2seq/finetune.py", line 206, in <dictcomp>
    pl_metrics = {f"pl_{prefix}_avg_{k}": v.compute().item() for k, v in self.metric_stores.items()}
  File "/home/shleifer/miniconda/lib/python3.8/site-packages/pytorch_lightning/metrics/metric.py", line 214, in wrapped_func
    self._sync_dist()
  File "/home/shleifer/miniconda/lib/python3.8/site-packages/pytorch_lightning/metrics/metric.py", line 177, in _sync_dist
    output_dict = apply_to_collection(
  File "/home/shleifer/miniconda/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 53, in apply_to_collection
    return elem_type({k: apply_to_collection(v, dtype, function, *args, **kwargs)
  File "/home/shleifer/miniconda/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 53, in <dictcomp>
    return elem_type({k: apply_to_collection(v, dtype, function, *args, **kwargs)
  File "/home/shleifer/miniconda/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 49, in apply_to_collection
    return function(data, *args, **kwargs)
  File "/home/shleifer/miniconda/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py", line 100, in gather_all_tensors_if_available
    torch.distributed.all_gather(gathered_result, result, group)
  File "/home/shleifer/miniconda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1185, in all_gather
    work = _default_pg.allgather([tensor_list], [tensor])
RuntimeError: Tensors must be CUDA and dense
```
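For reference, here is a minimal sketch of the `AverageMetric` idea, assuming the pytorch_lightning 1.0 `Metric` API (not the exact code from this PR). The key property the `all_gather` call in the traceback requires is that the metric state be a tensor on the module's device; states registered via `add_state` follow the module's device automatically, whereas accumulating raw Python floats leaves CPU tensors behind and triggers the error above:

```python
import torch
from pytorch_lightning.metrics import Metric


class AverageMetric(Metric):
    """Averages plain float values across steps and (in DDP) across processes."""

    def __init__(self, dist_sync_on_step=False):
        super().__init__(dist_sync_on_step=dist_sync_on_step)
        # States registered here are moved with the module, so all_gather
        # sees CUDA tensors instead of CPU ones.
        self.add_state("total", default=torch.tensor(0.0), dist_reduce_fx="sum")
        self.add_state("count", default=torch.tensor(0.0), dist_reduce_fx="sum")

    def update(self, value: float):
        # Build the update on the state's device so DDP syncing works.
        self.total += torch.as_tensor(value, device=self.total.device)
        self.count += 1

    def compute(self):
        return self.total / self.count
```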
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8269/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8269", "html_url": "https://github.com/huggingface/transformers/pull/8269", "diff_url": "https://github.com/huggingface/transformers/pull/8269.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8269.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8268
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8268/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8268/comments
https://api.github.com/repos/huggingface/transformers/issues/8268/events
https://github.com/huggingface/transformers/pull/8268
735,508,015
MDExOlB1bGxSZXF1ZXN0NTE0ODg4MTQy
8,268
Clean Trainer tests and datasets dep
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
COLLABORATOR
null
# What does this PR do?

This PR removes the installation of datasets from master and uses the dependency already in `testing` instead. It also cleans up the Trainer tests a bit by:

- using the decorator `requires_datasets` when needed
- using a temp dir for the output of one test, to avoid files being created when the user has optuna installed

A sketch of both cleanups is shown after this list.
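An illustrative sketch of the two cleanups (the decorator name follows the PR description and is spelled `require_datasets` in `transformers.testing_utils`; treat the exact name as an assumption, and the test body is a placeholder):

```python
import tempfile

from transformers import TrainingArguments
from transformers.testing_utils import require_datasets


@require_datasets  # skipped automatically when the datasets library is absent
def test_trainer_writes_to_tmp_dir():
    # Writing checkpoints/logs into a throwaway directory keeps the working
    # tree clean even when optional backends (e.g. optuna) are installed.
    with tempfile.TemporaryDirectory() as tmp_dir:
        args = TrainingArguments(output_dir=tmp_dir)
        ...  # build a Trainer(model=..., args=args) and exercise it here
```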
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8268/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8268", "html_url": "https://github.com/huggingface/transformers/pull/8268", "diff_url": "https://github.com/huggingface/transformers/pull/8268.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8268.patch", "merged_at": 1604436655000 }
https://api.github.com/repos/huggingface/transformers/issues/8267
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8267/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8267/comments
https://api.github.com/repos/huggingface/transformers/issues/8267/events
https://github.com/huggingface/transformers/pull/8267
735,507,114
MDExOlB1bGxSZXF1ZXN0NTE0ODg3Mzk0
8,267
[Seq2Seq] Make Seq2SeqArguments an independent file
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
MEMBER
null
# What does this PR do?

By putting all the `Seq2SeqTrainingArguments` logic in a separate file, `Seq2SeqTrainer` and `Seq2SeqTrainingArguments` can be used as standalone files, without having to download additional files pulled in by other dependencies.
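A hedged usage sketch of the standalone setup; the module names below mirror the files in `examples/seq2seq` and are assumptions rather than a public API, and `predict_with_generate` is assumed to be a field of the dataclass:

```python
# Both pieces can now be copied/imported together without the rest of the
# example scripts (module names are assumptions).
from seq2seq_trainer import Seq2SeqTrainer
from seq2seq_training_args import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="output",
    predict_with_generate=True,  # assumed field; adjust to the actual dataclass
)
# trainer = Seq2SeqTrainer(model=model, args=training_args, ...)
```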
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8267/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8267", "html_url": "https://github.com/huggingface/transformers/pull/8267", "diff_url": "https://github.com/huggingface/transformers/pull/8267.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8267.patch", "merged_at": 1604434414000 }
https://api.github.com/repos/huggingface/transformers/issues/8266
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8266/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8266/comments
https://api.github.com/repos/huggingface/transformers/issues/8266/events
https://github.com/huggingface/transformers/pull/8266
735,486,126
MDExOlB1bGxSZXF1ZXN0NTE0ODcwMDI0
8,266
german medbert model details
{ "login": "smanjil", "id": 11598535, "node_id": "MDQ6VXNlcjExNTk4NTM1", "avatar_url": "https://avatars.githubusercontent.com/u/11598535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/smanjil", "html_url": "https://github.com/smanjil", "followers_url": "https://api.github.com/users/smanjil/followers", "following_url": "https://api.github.com/users/smanjil/following{/other_user}", "gists_url": "https://api.github.com/users/smanjil/gists{/gist_id}", "starred_url": "https://api.github.com/users/smanjil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/smanjil/subscriptions", "organizations_url": "https://api.github.com/users/smanjil/orgs", "repos_url": "https://api.github.com/users/smanjil/repos", "events_url": "https://api.github.com/users/smanjil/events{/privacy}", "received_events_url": "https://api.github.com/users/smanjil/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do?

- added details for German MedBERT

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8266/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8266", "html_url": "https://github.com/huggingface/transformers/pull/8266", "diff_url": "https://github.com/huggingface/transformers/pull/8266.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8266.patch", "merged_at": 1604650874000 }
https://api.github.com/repos/huggingface/transformers/issues/8265
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8265/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8265/comments
https://api.github.com/repos/huggingface/transformers/issues/8265/events
https://github.com/huggingface/transformers/issues/8265
735,482,720
MDU6SXNzdWU3MzU0ODI3MjA=
8,265
Is there a pre-trained BERT model with the sequence length of 2048?
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
Hello, I want to use a pre-trained BERT model because I do not want to train the entire BERT model to analyze my data. Is there a pre-trained BERT model with a sequence length of 2048? Or do all pre-trained BERT models only have a sequence length of 512? Thank you.
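For context (a hedged pointer, not part of the original question): standard BERT checkpoints ship with 512 position embeddings, so for inputs in the 2048-token range a long-sequence model such as Longformer is the usual drop-in; `allenai/longformer-base-4096` accepts up to 4096 tokens:

```python
from transformers import LongformerModel, LongformerTokenizer

# Longformer ships with 4096 position embeddings, comfortably covering
# a 2048-token sequence.
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("a long document " * 500, truncation=True, max_length=2048, return_tensors="pt")
outputs = model(**inputs)
```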
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8265/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8264
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8264/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8264/comments
https://api.github.com/repos/huggingface/transformers/issues/8264/events
https://github.com/huggingface/transformers/pull/8264
735,475,747
MDExOlB1bGxSZXF1ZXN0NTE0ODYxNTQ1
8,264
New TensorFlow trainer version
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# 1. Input data\r\n\r\n> The PyTorch side of the library (and all the PyTorch scripts) have datasets that eventually yield dictionaries containing all inputs of the model as well as the labels. In an ideal world, it would be great if we could leverage that format easily for TF as well (so that the same script can be used for PT and TF by just changing a few lines, especially when using the datasets library). I don't know if that's possible or not but one thing to explore more I believe.\r\n\r\nCan you elaborate a bit more please? Do you mean that the input data given to the `.fit()` method should be a dictionary? If it is what you mean, it is already the case.\r\n\r\n# 2. Optimizer and scheduler\r\n> I like the new optimizer for gradient accumulation a lot. This feels like a very good design. Should we deprecate GradientAccumulator?\r\n\r\nYes, this should be deprecated because we won't use it anymore.\r\n\r\n> But where are the schedulers? Is this something you intend to control via callbacks? If that's the case I didn't see one with a sensible default (the linear + warmup used in PT for instance).\r\n\r\nThe schedulers are directly inside the optimizer, if you look at the `create_optimizer` method you can see that the scheduler is first created and then given to the Adam optimizer as input for the `learning_rate` parameter. In the previous Trainer the scheduler was returned only for being used in the logging, the scheduling is done automatically internally in the `tf.keras.optimizers.Optimizer` class.\r\n\r\n# 3. Callbacks\r\n> Leveraging Keras callbacks is definitely a good idea. My only remark here is that is should be more customizable. On the PT side we have some default callbacks and the init takes a list of additional callbacks the user can add.\r\n\r\nNo worries, this will be added in the next push :)\r\n\r\n# 4. Metrics\r\n> I'm not in favor of adding a new file of metrics we will have to maintain forever. We should provide an adapter for datasets Metric object and rely on the datasets library (or existing Keras metrics if users prefer them) but they shouldn't be in the transformers library (on the PT we will soon deprecate the ones in the metric folder).\r\n\r\nI fully agree, and this is why I asked @thomwolf and @lhoestq opinion's on this on the best way to integrate Keras metrics inside datasets :) For this will require, I think, a non negligeable amont of work in `datasets` that I would prefer to do not do alone.", "> Can you elaborate a bit more please? Do you mean that the input data given to the `.fit()` method should be a dictionary? If it is what you mean, it is already the case.\r\n\r\nIf you look at the new example scripts (like [run_glue](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py)) the datasets are immediately sent to `Trainer` with no platform-specific processing needed. It would be really cool if we could just replace `Trainer` by `TFTrainer` in that script and have it work the same way. I'm not sure if the easiest for that is to change the input of `training_step` or do some internal processing of the dataset inside `TFTrainer`.", "> If you look at the new example scripts (like run_glue) the datasets are immediately sent to Trainer with no platform-specific processing needed. It would be really cool if we could just replace Trainer by TFTrainer in that script and have it work the same way. 
I'm not sure if the easiest for that is to change the input of training_step or do some internal processing of the dataset inside TFTrainer.\r\n\r\nHum I see. As a first glance, I would say it will requires much more changes than just replace Trainer by TFTrainer, at least all the metrics part won't be compliant, and the way we create the model is different (in TF we have to do that in a strategy).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,686
1,619
CONTRIBUTOR
null
Hello,

This PR is a proposal for an updated version of the current TensorFlow trainer. This new trainer brings the following improvements (a usage sketch follows the list):

- Uses the Keras methods `.compile()` + `.fit()` instead of the custom training loop. This change brings better integration with TensorFlow and the different strategies that can be used for training a model. It also takes advantage of all the optimizations done by the Google team for proper training.
- Uses the Keras methods `.evaluate()` and `.predict()` instead of the custom evaluation loop. Same advantages as for the training part.
- Uses the Keras callbacks and metrics. We can take advantage of the callback and metric features proposed by default when training/evaluating a model with Keras. One can also create their own callbacks and metrics and use them for training/evaluation.
- Big reduction in lines of code, which makes it easier to maintain.
- Creates a new optimizer for gradient accumulation, moving that logic into the optimizer instead of the trainer.

Of course this is still far from finished and there is still work to do, but you can easily see the direction I'm thinking of. @LysandreJik @sgugger I will be happy to hear your comments. @thomwolf @lhoestq Here I have created a file where I put all the Keras metrics, but we should definitely think of a way to integrate such metrics directly inside `datasets`, where they will be better suited.
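A minimal sketch of the `compile()`/`fit()` direction described above, assuming a TF sequence classification model and `tf.data` datasets yielding `(features_dict, labels)`; this is not the PR's final API, just the standard Keras pattern it builds on:

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Model creation has to happen under the distribution strategy's scope.
strategy = tf.distribute.get_strategy()
with strategy.scope():
    model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
    )

# train_ds / eval_ds are assumed tf.data.Dataset objects; Keras callbacks
# slot straight into fit(), which is the point of this redesign.
# model.fit(train_ds, validation_data=eval_ds, epochs=3,
#           callbacks=[tf.keras.callbacks.TensorBoard("logs")])
```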
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8264/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8264/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8264", "html_url": "https://github.com/huggingface/transformers/pull/8264", "diff_url": "https://github.com/huggingface/transformers/pull/8264.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8264.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8263
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8263/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8263/comments
https://api.github.com/repos/huggingface/transformers/issues/8263/events
https://github.com/huggingface/transformers/issues/8263
735,465,801
MDU6SXNzdWU3MzU0NjU4MDE=
8,263
GPT2 is not jit-traceable
{ "login": "gcompagnoni", "id": 60468746, "node_id": "MDQ6VXNlcjYwNDY4NzQ2", "avatar_url": "https://avatars.githubusercontent.com/u/60468746?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gcompagnoni", "html_url": "https://github.com/gcompagnoni", "followers_url": "https://api.github.com/users/gcompagnoni/followers", "following_url": "https://api.github.com/users/gcompagnoni/following{/other_user}", "gists_url": "https://api.github.com/users/gcompagnoni/gists{/gist_id}", "starred_url": "https://api.github.com/users/gcompagnoni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gcompagnoni/subscriptions", "organizations_url": "https://api.github.com/users/gcompagnoni/orgs", "repos_url": "https://api.github.com/users/gcompagnoni/repos", "events_url": "https://api.github.com/users/gcompagnoni/events{/privacy}", "received_events_url": "https://api.github.com/users/gcompagnoni/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! I believe those are warnings and not errors? Does it change the expected results when tracing the model?", "You are right, despite the warnings (and my - limited - understanding of what should work inside tracing), the output of the compiled model with different inputs are comparable to the base ones. \r\n\r\nThanks a lot!", "Glad it works!", "@gcompagnoni @LysandreJik \r\nHi, there\r\nWhy do I try the example below occurs error ? issue reported in #15598\r\n\r\n```\r\nimport torch\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\nmodel.eval()\r\n\r\ntokens=tokenizer('The cat is on the table.', return_tensors='pt')['input_ids']\r\n\r\nwith torch.jit.optimized_execution(True):\r\n traced_model = torch.jit.trace(model, tokens)\r\n```\r\n" ]
1,604
1,644
1,604
CONTRIBUTOR
null
## Information

I would like to use PyTorch tracing on a pretrained GPT2 model, but I run into these warnings for the attention layers:

```
python3.8/site-packages/transformers/modeling_gpt2.py:164: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  w = w / (float(v.size(-1)) ** 0.5)
python3.8/site-packages/transformers/modeling_gpt2.py:169: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  mask = self.bias[:, :, ns - nd : ns, :ns]
```

The first warning concerns the same line as the one reported in #3954 (and fixed by #3955).

## To reproduce

You can run the following:

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

tokens = tokenizer('The cat is on the table.', return_tensors='pt')['input_ids']

with torch.jit.optimized_execution(True):
    traced_model = torch.jit.trace(model, tokens)
```

## Environment info

- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0+cpu (False)
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
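As the replies note, these are warnings rather than errors; the practical check is whether the traced module still matches eager execution. A small sanity check that continues the reproduction snippet above:

```python
# Compare traced vs. eager outputs on the traced input; for a stronger check,
# repeat with inputs of other lengths -- the TracerWarnings above are about
# exactly that kind of generalization.
with torch.no_grad():
    eager_logits = model(tokens)[0]
    traced_logits = traced_model(tokens)[0]
print(torch.allclose(eager_logits, traced_logits, atol=1e-5))
```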
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8263/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8262
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8262/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8262/comments
https://api.github.com/repos/huggingface/transformers/issues/8262/events
https://github.com/huggingface/transformers/pull/8262
735,461,691
MDExOlB1bGxSZXF1ZXN0NTE0ODUwMDM4
8,262
[distributed testing] forward the worker stderr to the parent process
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
As discussed on slack, this PR:

* on distributed failure, reproduces the combined `stderr` of the worker processes in the exception of the test invoking the distributed process

This is so that the CI's new optimized reports will include the full error message.

@sgugger
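A minimal sketch of the idea (a hypothetical helper, not the exact testing utility in the diff): run the distributed launcher as a subprocess, capture its combined output, and re-raise it in the parent so the CI report carries the real error:

```python
import subprocess
import sys


def run_distributed(cmd: list) -> str:
    # Merge stderr into stdout so the worker tracebacks travel together.
    result = subprocess.run(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    if result.returncode != 0:
        # Surface the worker output in the parent's exception.
        raise RuntimeError(
            f"'{' '.join(cmd)}' failed with {result.returncode}:\n{result.stdout}"
        )
    return result.stdout


# Example (hypothetical test script name):
# run_distributed([sys.executable, "-m", "torch.distributed.launch",
#                  "--nproc_per_node=2", "some_test_script.py"])
```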
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8262/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8262", "html_url": "https://github.com/huggingface/transformers/pull/8262", "diff_url": "https://github.com/huggingface/transformers/pull/8262.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8262.patch", "merged_at": 1604423094000 }
https://api.github.com/repos/huggingface/transformers/issues/8261
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8261/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8261/comments
https://api.github.com/repos/huggingface/transformers/issues/8261/events
https://github.com/huggingface/transformers/issues/8261
735,422,752
MDU6SXNzdWU3MzU0MjI3NTI=
8,261
Encoder Decoder Model
{ "login": "arditobryan", "id": 63985091, "node_id": "MDQ6VXNlcjYzOTg1MDkx", "avatar_url": "https://avatars.githubusercontent.com/u/63985091?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arditobryan", "html_url": "https://github.com/arditobryan", "followers_url": "https://api.github.com/users/arditobryan/followers", "following_url": "https://api.github.com/users/arditobryan/following{/other_user}", "gists_url": "https://api.github.com/users/arditobryan/gists{/gist_id}", "starred_url": "https://api.github.com/users/arditobryan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arditobryan/subscriptions", "organizations_url": "https://api.github.com/users/arditobryan/orgs", "repos_url": "https://api.github.com/users/arditobryan/repos", "events_url": "https://api.github.com/users/arditobryan/events{/privacy}", "received_events_url": "https://api.github.com/users/arditobryan/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Maybe I found out, is it:\r\n\r\n```\r\nfor i, sample_output in enumerate(generated):\r\n print(\"{}: {}\".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))\r\n```\r\n?", "You can also make use of `tokenizer.batch_decode(...)`" ]
1,604
1,604
1,604
NONE
null
Hi, I am following the instructions written on the HuggingFace website to use an encoder-decoder model:

```python
from transformers import EncoderDecoderModel, BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# initialize Bert2Bert from pre-trained checkpoints
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')
#model.save_pretrained('/content/drive/My Drive/NLP/'+'model_1')

# forward
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)

# training
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids, return_dict=True)
#print(type(outputs))  # Seq2SeqLMOutput
loss, logits = outputs.loss, outputs.logits

# save and load from pretrained
#model.save_pretrained("bert2bert")
#model = EncoderDecoderModel.from_pretrained("bert2bert")

# generation
generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
generated
# tensor([[   0, 1012, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010,
#          1010, 1010, 1010, 1010, 1010, 1010, 1010, 1010]])
```

However, I have no idea how to decode the generated output; can anybody please help? Thank you.
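Following the replies above, the generated ids can be mapped back to text with the tokenizer; `batch_decode` handles the whole batch at once, and `skip_special_tokens` drops padding/control tokens such as the leading `0` (`[PAD]`) in the tensor above:

```python
# Decode the whole batch in one call...
texts = tokenizer.batch_decode(generated, skip_special_tokens=True)
print(texts)

# ...or one sequence at a time.
for i, sample_output in enumerate(generated):
    print(i, tokenizer.decode(sample_output, skip_special_tokens=True))
```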
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8261/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8260
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8260/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8260/comments
https://api.github.com/repos/huggingface/transformers/issues/8260/events
https://github.com/huggingface/transformers/pull/8260
735,382,732
MDExOlB1bGxSZXF1ZXN0NTE0Nzg0OTM4
8,260
[fix] Skip tatoeba tests if Tatoeba-Challenge not cloned
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8260/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8260", "html_url": "https://github.com/huggingface/transformers/pull/8260", "diff_url": "https://github.com/huggingface/transformers/pull/8260.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8260.patch", "merged_at": 1604414970000 }
https://api.github.com/repos/huggingface/transformers/issues/8259
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8259/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8259/comments
https://api.github.com/repos/huggingface/transformers/issues/8259/events
https://github.com/huggingface/transformers/issues/8259
735,322,010
MDU6SXNzdWU3MzUzMjIwMTA=
8,259
Disable default sigmoid function for single label classification Inference API
{ "login": "Jiaxin-Pei", "id": 42936410, "node_id": "MDQ6VXNlcjQyOTM2NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/42936410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jiaxin-Pei", "html_url": "https://github.com/Jiaxin-Pei", "followers_url": "https://api.github.com/users/Jiaxin-Pei/followers", "following_url": "https://api.github.com/users/Jiaxin-Pei/following{/other_user}", "gists_url": "https://api.github.com/users/Jiaxin-Pei/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jiaxin-Pei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jiaxin-Pei/subscriptions", "organizations_url": "https://api.github.com/users/Jiaxin-Pei/orgs", "repos_url": "https://api.github.com/users/Jiaxin-Pei/repos", "events_url": "https://api.github.com/users/Jiaxin-Pei/events{/privacy}", "received_events_url": "https://api.github.com/users/Jiaxin-Pei/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Hi! Right now the sigmoid function is applied when the pipeline detects that there is a single label. You would like the option to disable the sigmoid function in that case?", "@LysandreJik \r\nSorry for my late reply.\r\nYes, cuz when people are doing regression tasks using the single-label SequenceClassification model, the output range depends on the specific task. For example, when predicting age from the text, [0,1] output after a sigmoid function is not a good fit here. ", "Indeed, I understand! I'm adding an option to return the raw outputs in #8328 ", "Thank you! I'm also wondering if this will be reflected by the Inference API? The inference API is using the sequence classification pipeline, therefore the API output on my model page is different from the original model output, which might confuse potential users. ", "@Jiaxin-Pei see discussion in #8328", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
CONTRIBUTOR
null
# 🚀 Feature request

Allow people to disable the default sigmoid function in TextClassificationPipeline (maybe via model cards?).

## Motivation

When we use a sequence classification model (e.g. RobertaForSequenceClassification) for regression tasks, the output may have a range other than [0,1], so it would be better to allow configuring the sigmoid function in TextClassificationPipeline.
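Until the pipeline exposes raw scores, a workaround sketch is to call the model directly and read the logits, skipping the pipeline's sigmoid entirely (the checkpoint name below is a placeholder for any single-label regression model):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# "some/regression-model" is a placeholder checkpoint name.
tokenizer = AutoTokenizer.from_pretrained("some/regression-model")
model = AutoModelForSequenceClassification.from_pretrained("some/regression-model")

inputs = tokenizer("An example sentence to score.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, return_dict=True)
print(outputs.logits)  # raw regression output, no sigmoid applied
```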
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8259/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8258
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8258/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8258/comments
https://api.github.com/repos/huggingface/transformers/issues/8258/events
https://github.com/huggingface/transformers/pull/8258
735,311,081
MDExOlB1bGxSZXF1ZXN0NTE0NzI2NTg5
8,258
Create README.md
{ "login": "Jiaxin-Pei", "id": 42936410, "node_id": "MDQ6VXNlcjQyOTM2NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/42936410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jiaxin-Pei", "html_url": "https://github.com/Jiaxin-Pei", "followers_url": "https://api.github.com/users/Jiaxin-Pei/followers", "following_url": "https://api.github.com/users/Jiaxin-Pei/following{/other_user}", "gists_url": "https://api.github.com/users/Jiaxin-Pei/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jiaxin-Pei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jiaxin-Pei/subscriptions", "organizations_url": "https://api.github.com/users/Jiaxin-Pei/orgs", "repos_url": "https://api.github.com/users/Jiaxin-Pei/repos", "events_url": "https://api.github.com/users/Jiaxin-Pei/events{/privacy}", "received_events_url": "https://api.github.com/users/Jiaxin-Pei/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Add model card for pedropei/question-intimacy" ]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do?

Fixes # (issue)

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8258/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8258", "html_url": "https://github.com/huggingface/transformers/pull/8258", "diff_url": "https://github.com/huggingface/transformers/pull/8258.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8258.patch", "merged_at": 1604650753000 }
https://api.github.com/repos/huggingface/transformers/issues/8257
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8257/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8257/comments
https://api.github.com/repos/huggingface/transformers/issues/8257/events
https://github.com/huggingface/transformers/pull/8257
735,308,707
MDExOlB1bGxSZXF1ZXN0NTE0NzI0NTk5
8,257
Create README.md
{ "login": "Jiaxin-Pei", "id": 42936410, "node_id": "MDQ6VXNlcjQyOTM2NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/42936410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jiaxin-Pei", "html_url": "https://github.com/Jiaxin-Pei", "followers_url": "https://api.github.com/users/Jiaxin-Pei/followers", "following_url": "https://api.github.com/users/Jiaxin-Pei/following{/other_user}", "gists_url": "https://api.github.com/users/Jiaxin-Pei/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jiaxin-Pei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jiaxin-Pei/subscriptions", "organizations_url": "https://api.github.com/users/Jiaxin-Pei/orgs", "repos_url": "https://api.github.com/users/Jiaxin-Pei/repos", "events_url": "https://api.github.com/users/Jiaxin-Pei/events{/privacy}", "received_events_url": "https://api.github.com/users/Jiaxin-Pei/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do?

Fixes # (issue)

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8257/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8257", "html_url": "https://github.com/huggingface/transformers/pull/8257", "diff_url": "https://github.com/huggingface/transformers/pull/8257.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8257.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8256
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8256/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8256/comments
https://api.github.com/repos/huggingface/transformers/issues/8256/events
https://github.com/huggingface/transformers/pull/8256
735,272,556
MDExOlB1bGxSZXF1ZXN0NTE0Njk0NTc4
8,256
[FIX] TextGenerationPipeline is currently broken.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ran all pipeline tests with @slow too to make sure:\r\n\r\n```\r\n==================================================================== warnings summary ====================================================================\r\n.venv/lib/python3.8/site-packages/tensorflow/python/autograph/utils/testing.py:21\r\n /home/nicolas/src/transformers/.venv/lib/python3.8/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is \r\ndeprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\ntests/test_pipelines_fill_mask.py::FillMaskPipelineTests::test_tf_fill_mask_results\r\n /home/nicolas/src/transformers/src/transformers/pipelines.py:1200: FutureWarning: The `topk` argument is deprecated and will be removed in a future vers\r\nion, use `top_k` instead.\r\n warnings.warn(\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n====================================================== 93 passed, 2 warnings in 2234.37s (0:37:14) =======================================================\r\n\r\n```" ]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do?

It's most likely due to #8180. What's missing is a multi vs single string handler at the beginning of the pipe (see the sketch below). There was also no testing of this pipeline.

This also changes the Conversational pipeline tests, which seemed to have test failures as well. That was linked to having state within the input that gets consumed; the tests did not recreate it, so we had a stale `Conversation` object for the new test.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.

@thomwolf
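An illustrative sketch of the "single string vs list" normalization (names are illustrative, not the exact diff): wrap a lone string into a batch on the way in, and unwrap the result on the way out:

```python
def _normalize(text_inputs):
    # A lone string becomes a one-element batch; remember to unwrap later.
    if isinstance(text_inputs, str):
        return [text_inputs], True
    return list(text_inputs), False


prompts, was_single = _normalize("My name is")
# Stand-in for the actual generation step:
results = [{"generated_text": p + " [generated continuation]"} for p in prompts]
output = results[0] if was_single else results
```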
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8256/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8256", "html_url": "https://github.com/huggingface/transformers/pull/8256", "diff_url": "https://github.com/huggingface/transformers/pull/8256.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8256.patch", "merged_at": 1604416223000 }
https://api.github.com/repos/huggingface/transformers/issues/8255
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8255/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8255/comments
https://api.github.com/repos/huggingface/transformers/issues/8255/events
https://github.com/huggingface/transformers/pull/8255
735,268,053
MDExOlB1bGxSZXF1ZXN0NTE0NjkwODkx
8,255
Create README.md
{ "login": "hasantanvir79", "id": 17002992, "node_id": "MDQ6VXNlcjE3MDAyOTky", "avatar_url": "https://avatars.githubusercontent.com/u/17002992?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hasantanvir79", "html_url": "https://github.com/hasantanvir79", "followers_url": "https://api.github.com/users/hasantanvir79/followers", "following_url": "https://api.github.com/users/hasantanvir79/following{/other_user}", "gists_url": "https://api.github.com/users/hasantanvir79/gists{/gist_id}", "starred_url": "https://api.github.com/users/hasantanvir79/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hasantanvir79/subscriptions", "organizations_url": "https://api.github.com/users/hasantanvir79/orgs", "repos_url": "https://api.github.com/users/hasantanvir79/repos", "events_url": "https://api.github.com/users/hasantanvir79/events{/privacy}", "received_events_url": "https://api.github.com/users/hasantanvir79/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "why is the model card not visible under transformer/model_cards/tartuNLP/EstBERT/README.md link?\r\n\r\nI am quite new to git thingy :S " ]
1,604
1,604
1,604
CONTRIBUTOR
null
Initial commit # What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8255/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8255", "html_url": "https://github.com/huggingface/transformers/pull/8255", "diff_url": "https://github.com/huggingface/transformers/pull/8255.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8255.patch", "merged_at": 1604651664000 }
https://api.github.com/repos/huggingface/transformers/issues/8254
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8254/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8254/comments
https://api.github.com/repos/huggingface/transformers/issues/8254/events
https://github.com/huggingface/transformers/pull/8254
735,222,491
MDExOlB1bGxSZXF1ZXN0NTE0NjUyOTUy
8,254
[Seq2Seq] Correct import in Seq2Seq Trainer
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
MEMBER
null
# What does this PR do? Correct import as mentioned by @stas00 here: https://github.com/huggingface/transformers/pull/8194#discussion_r515690821 Pinging @stas00 for review as well here. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8254/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8254/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8254", "html_url": "https://github.com/huggingface/transformers/pull/8254", "diff_url": "https://github.com/huggingface/transformers/pull/8254.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8254.patch", "merged_at": 1604408202000 }
https://api.github.com/repos/huggingface/transformers/issues/8253
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8253/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8253/comments
https://api.github.com/repos/huggingface/transformers/issues/8253/events
https://github.com/huggingface/transformers/issues/8253
735,176,368
MDU6SXNzdWU3MzUxNzYzNjg=
8,253
When the txt file reaches 5 GB, a "Killed" prompt appears.
{ "login": "ismymajia", "id": 17922949, "node_id": "MDQ6VXNlcjE3OTIyOTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17922949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ismymajia", "html_url": "https://github.com/ismymajia", "followers_url": "https://api.github.com/users/ismymajia/followers", "following_url": "https://api.github.com/users/ismymajia/following{/other_user}", "gists_url": "https://api.github.com/users/ismymajia/gists{/gist_id}", "starred_url": "https://api.github.com/users/ismymajia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ismymajia/subscriptions", "organizations_url": "https://api.github.com/users/ismymajia/orgs", "repos_url": "https://api.github.com/users/ismymajia/repos", "events_url": "https://api.github.com/users/ismymajia/events{/privacy}", "received_events_url": "https://api.github.com/users/ismymajia/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "What is your machine's specs? It's probably an out of memory error.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
I am running run_language_modeling.py with the following command: python run_language_modeling.py \ --output_dir ${model_dir} \ --tokenizer_name $data_dir/wordpiece-custom.json \ --config_name $data_dir/config.json \ --train_data_file "$data_dir/train.txt" \ --eval_data_file $data_dir/valid.txt \ --block_size=128 \ --do_train \ --per_device_train_batch_size 1 \ --gradient_accumulation_steps 64 \ --learning_rate 6e-4 \ --weight_decay 0.01 \ --adam_epsilon 1e-6 \ --adam_beta1 0.9 \ --adam_beta2 0.98 \ --max_steps 500_000 \ --warmup_steps 24_000 \ --fp16 \ --logging_dir ${model_dir}/tensorboard \ --save_steps 1000 \ --save_total_limit 20 \ --seed 108 \ --max_steps -1 \ --num_train_epochs 20 \ --overwrite_output_dir When the training txt file reaches 5 GB, the process is killed (a "Killed" message appears).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8253/timeline
completed
null
null
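For the "Killed" report in #8253 above, the comment points to memory: the example script's dataset classes read the whole training file into RAM, which a 5 GB text file can easily exhaust. A hedged sketch of a lazy alternative follows; `LazyTextDataset` is an illustrative name, not part of the script.

```python
import torch
from torch.utils.data import IterableDataset

class LazyTextDataset(IterableDataset):
    """Tokenize one line at a time instead of loading the whole file."""

    def __init__(self, tokenizer, file_path, block_size=128):
        self.tokenizer = tokenizer
        self.file_path = file_path
        self.block_size = block_size

    def __iter__(self):
        with open(self.file_path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                ids = self.tokenizer(line, truncation=True, max_length=self.block_size)["input_ids"]
                yield torch.tensor(ids, dtype=torch.long)
```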
https://api.github.com/repos/huggingface/transformers/issues/8252
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8252/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8252/comments
https://api.github.com/repos/huggingface/transformers/issues/8252/events
https://github.com/huggingface/transformers/pull/8252
735,160,416
MDExOlB1bGxSZXF1ZXN0NTE0NjAxODEw
8,252
Updated Reformer to use caching during generation
{ "login": "guillaume-be", "id": 27071604, "node_id": "MDQ6VXNlcjI3MDcxNjA0", "avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guillaume-be", "html_url": "https://github.com/guillaume-be", "followers_url": "https://api.github.com/users/guillaume-be/followers", "following_url": "https://api.github.com/users/guillaume-be/following{/other_user}", "gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}", "starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions", "organizations_url": "https://api.github.com/users/guillaume-be/orgs", "repos_url": "https://api.github.com/users/guillaume-be/repos", "events_url": "https://api.github.com/users/guillaume-be/events{/privacy}", "received_events_url": "https://api.github.com/users/guillaume-be/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great catch!", "Let's merge that quickly so that I can integrate it into https://github.com/huggingface/transformers/pull/6949/files#diff-b7601d397d5d60326ce61a9c91beaa2afa026014141052b32b07e1d044fbbe17", "Actually, we would have to add in two spots of this `generate` version. Considering that we will merge the big generate refactor today, I just added your fix quickly here: https://github.com/huggingface/transformers/pull/6949/commits/12b54eceeb57229ffd940cadf47e6e159b101d8e\r\n\r\nMentioned your PR at the fix - hope it's ok for you to close this PR to avoid any more merge conflicts.\r\n\r\nThanks a lot!" ]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? The current reformer implementation supports caching of buckets and states, but this is not used during generation. Running a generation example in debugging mode, such as ```python from transformers import ReformerModelWithLMHead, ReformerTokenizer model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment").cuda() tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment") output = tok.decode( model.generate(tok.encode("Notwithstanding", return_tensors="pt").cuda(), do_sample=True, temperature=0.7, max_length=100, use_cache=True)[0]) ``` One can see that the `past_buckets_states` passed to the attention are always `None` (at https://github.com/huggingface/transformers/blob/504ff7bb1234991eb07595c123b264a8a1064bd3/src/transformers/modeling_reformer.py#L365) This is because the name of the past states for the reformer are neither `past_key_values` or `mems`. This PR adds the name of the past states to the generation `past` allocation. Generally, it may make sense to harmonize the `past` value for all models, so that the `generate` function generalizes better ## Who can review? Text Generation: @patrickvonplaten, @TevenLeScao Reformer: @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8252/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8252", "html_url": "https://github.com/huggingface/transformers/pull/8252", "diff_url": "https://github.com/huggingface/transformers/pull/8252.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8252.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8251
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8251/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8251/comments
https://api.github.com/repos/huggingface/transformers/issues/8251/events
https://github.com/huggingface/transformers/issues/8251
735,143,038
MDU6SXNzdWU3MzUxNDMwMzg=
8,251
Train BERT with CLI commands
{ "login": "Stimmot", "id": 29411999, "node_id": "MDQ6VXNlcjI5NDExOTk5", "avatar_url": "https://avatars.githubusercontent.com/u/29411999?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Stimmot", "html_url": "https://github.com/Stimmot", "followers_url": "https://api.github.com/users/Stimmot/followers", "following_url": "https://api.github.com/users/Stimmot/following{/other_user}", "gists_url": "https://api.github.com/users/Stimmot/gists{/gist_id}", "starred_url": "https://api.github.com/users/Stimmot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Stimmot/subscriptions", "organizations_url": "https://api.github.com/users/Stimmot/orgs", "repos_url": "https://api.github.com/users/Stimmot/repos", "events_url": "https://api.github.com/users/Stimmot/events{/privacy}", "received_events_url": "https://api.github.com/users/Stimmot/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "It's probably tokenizing. How big is your dataset?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
I have downloaded the HuggingFace BERT model from the transformers repository found [here][1] and would like to train the model on custom NER labels by using the run_ner.py script, as referenced [here][2] in the section "Named Entity Recognition". I define model ("bert-base-german-cased"), data_dir ("Data/sentence_data.txt") and labels ("Data/labels.txt") as defaults in the code. Now I'm using this input for the command line: ``` python run_ner.py --output_dir="Models" --num_train_epochs=3 --logging_steps=100 --do_train --do_eval --do_predict ``` But all it does is tell me: ``` Some weights of the model checkpoint at bert-base-german-cased were not used when initializing BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias'] - This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of BertForTokenClassification were not initialized from the model checkpoint at bert-base-german-cased and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` After that it just stops, not ending the script, but simply waiting. Does anyone know what could be the problem here? Am I missing a parameter? My sentence_data.txt in CoNLL format looks like this (small snippet): ``` Strafverfahren O gegen O ; O wegen O Diebstahls O hat O das O Amtsgericht Ort Leipzig Ort - O Strafrichter O ``` And this is how I defined my labels in labels.txt: ``` "Date", "Delikt", "Strafe_Tatbestand", "Schadensbetrag", "Geständnis_ja", "Vorstrafe_ja", "Vorstrafe_nein", "Ort", "Strafe_Gesamtfreiheitsstrafe_Dauer", "Strafe_Gesamtsatz_Dauer", "Strafe_Gesamtsatz_Betrag" ``` [1]: https://github.com/huggingface/transformers [2]: https://huggingface.co/transformers/task_summary.html
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8251/timeline
completed
null
null
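Besides the tokenization time suggested in the comments on #8251 above, the quoted labels.txt is worth a second look: the token-classification example historically expected one plain label per line, not a quoted, comma-separated list. A hedged sketch of a reader for that format (the function name is illustrative):

```python
def read_labels(path: str):
    # One plain label per line, e.g. `Ort` or `Date`; no quotes or commas.
    with open(path, encoding="utf-8") as f:
        labels = [line.strip() for line in f if line.strip()]
    if "O" not in labels:
        labels = ["O"] + labels  # the outside tag must always be present
    return labels
```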
https://api.github.com/repos/huggingface/transformers/issues/8250
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8250/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8250/comments
https://api.github.com/repos/huggingface/transformers/issues/8250/events
https://github.com/huggingface/transformers/issues/8250
735,089,509
MDU6SXNzdWU3MzUwODk1MDk=
8,250
Are tokenizer.vocab keys and values swapped starting from line 261?
{ "login": "zepen", "id": 18568380, "node_id": "MDQ6VXNlcjE4NTY4Mzgw", "avatar_url": "https://avatars.githubusercontent.com/u/18568380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zepen", "html_url": "https://github.com/zepen", "followers_url": "https://api.github.com/users/zepen/followers", "following_url": "https://api.github.com/users/zepen/following{/other_user}", "gists_url": "https://api.github.com/users/zepen/gists{/gist_id}", "starred_url": "https://api.github.com/users/zepen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zepen/subscriptions", "organizations_url": "https://api.github.com/users/zepen/orgs", "repos_url": "https://api.github.com/users/zepen/repos", "events_url": "https://api.github.com/users/zepen/events{/privacy}", "received_events_url": "https://api.github.com/users/zepen/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
transformers version 3.0.0 ```python tokenizer = BertTokenizer.from_pretrained('bert-base-chinese') for v, i in tokenizer.vocab.items(): print(v, i) ``` ![image](https://user-images.githubusercontent.com/18568380/97958832-04cc1080-1de9-11eb-8c2b-0463c902853e.png) I find that the keys and values may be in the wrong positions.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8250/timeline
completed
null
null
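On the #8250 question above: nothing is swapped. In the stock `BertTokenizer`, `vocab` maps token to id, so `items()` yields `(token, id)` pairs; the reverse direction needs an inverted mapping or `convert_ids_to_tokens`. A minimal sketch, assuming that tokenizer:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")

# vocab maps token -> id, so items() yields (token, id) pairs.
for token, token_id in list(tokenizer.vocab.items())[:5]:
    print(token, token_id)

# Going the other way requires inverting the dict (or convert_ids_to_tokens).
id_to_token = {i: t for t, i in tokenizer.vocab.items()}
print(id_to_token[261], tokenizer.convert_ids_to_tokens(261))
```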
https://api.github.com/repos/huggingface/transformers/issues/8249
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8249/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8249/comments
https://api.github.com/repos/huggingface/transformers/issues/8249/events
https://github.com/huggingface/transformers/pull/8249
734,819,061
MDExOlB1bGxSZXF1ZXN0NTE0MzE2NzE5
8,249
[ray] Support `n_jobs` for Ray hyperparameter search on CPUs
{ "login": "richardliaw", "id": 4529381, "node_id": "MDQ6VXNlcjQ1MjkzODE=", "avatar_url": "https://avatars.githubusercontent.com/u/4529381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richardliaw", "html_url": "https://github.com/richardliaw", "followers_url": "https://api.github.com/users/richardliaw/followers", "following_url": "https://api.github.com/users/richardliaw/following{/other_user}", "gists_url": "https://api.github.com/users/richardliaw/gists{/gist_id}", "starred_url": "https://api.github.com/users/richardliaw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richardliaw/subscriptions", "organizations_url": "https://api.github.com/users/richardliaw/orgs", "repos_url": "https://api.github.com/users/richardliaw/repos", "events_url": "https://api.github.com/users/richardliaw/events{/privacy}", "received_events_url": "https://api.github.com/users/richardliaw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
COLLABORATOR
null
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8249/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8249", "html_url": "https://github.com/huggingface/transformers/pull/8249", "diff_url": "https://github.com/huggingface/transformers/pull/8249.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8249.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8248
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8248/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8248/comments
https://api.github.com/repos/huggingface/transformers/issues/8248/events
https://github.com/huggingface/transformers/pull/8248
734,778,665
MDExOlB1bGxSZXF1ZXN0NTE0MjgzODYx
8,248
Model card: GPT-2 fine-tuned on CommonGen
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8248/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8248", "html_url": "https://github.com/huggingface/transformers/pull/8248", "diff_url": "https://github.com/huggingface/transformers/pull/8248.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8248.patch", "merged_at": 1604650512000 }
https://api.github.com/repos/huggingface/transformers/issues/8247
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8247/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8247/comments
https://api.github.com/repos/huggingface/transformers/issues/8247/events
https://github.com/huggingface/transformers/pull/8247
734,765,097
MDExOlB1bGxSZXF1ZXN0NTE0MjcyNzAx
8,247
Model card: CodeBERT fine-tuned for Insecure Code Detection
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "You can add the dataset(s) id(s) even for datasets not currently implemented in the `datasets` lib. That way, it will prompt us, or someone from the community, to add it at some point :)\r\n\r\nActually, did you take a look at how to implement a new `dataset`, @mrm8488? We can help, cc @lhoestq @thomwolf ", "I didn't know I could add the dataset `id` if it was not available at HF/Datasets. I will do it next times. Thanks for letting me know @julien-c. And yes, I was talking with @thomwolf and I will try to add this dataset to HF/Datasets ASAP (this weekend)." ]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8247/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8247", "html_url": "https://github.com/huggingface/transformers/pull/8247", "diff_url": "https://github.com/huggingface/transformers/pull/8247.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8247.patch", "merged_at": 1604650425000 }
https://api.github.com/repos/huggingface/transformers/issues/8246
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8246/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8246/comments
https://api.github.com/repos/huggingface/transformers/issues/8246/events
https://github.com/huggingface/transformers/pull/8246
734,758,267
MDExOlB1bGxSZXF1ZXN0NTE0MjY3MTg2
8,246
[Notebooks] Add new encoder-decoder notebooks
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
MEMBER
null
# What does this PR do? Adds 2 community notebooks ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8246/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8246", "html_url": "https://github.com/huggingface/transformers/pull/8246", "diff_url": "https://github.com/huggingface/transformers/pull/8246.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8246.patch", "merged_at": 1604344916000 }
https://api.github.com/repos/huggingface/transformers/issues/8245
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8245/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8245/comments
https://api.github.com/repos/huggingface/transformers/issues/8245/events
https://github.com/huggingface/transformers/pull/8245
734,744,096
MDExOlB1bGxSZXF1ZXN0NTE0MjU2MTc3
8,245
Add XLMProphetNetTokenizer to tokenization auto
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
MEMBER
null
Closes #8196
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8245/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8245/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8245", "html_url": "https://github.com/huggingface/transformers/pull/8245", "diff_url": "https://github.com/huggingface/transformers/pull/8245.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8245.patch", "merged_at": 1604344210000 }
https://api.github.com/repos/huggingface/transformers/issues/8244
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8244/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8244/comments
https://api.github.com/repos/huggingface/transformers/issues/8244/events
https://github.com/huggingface/transformers/issues/8244
734,741,759
MDU6SXNzdWU3MzQ3NDE3NTk=
8,244
_shift_right when to use
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @rabeehkarimimahabadi, it's a convenience function that is used if `input_ids` and `labels` are provided but no `decoder_input_ids`. In this case this function automatically creates the correct `decoder_input_ids` as described here: https://huggingface.co/transformers/model_doc/t5.html?highlight=t5#training", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
Hi, in modeling_t5 there is a function called `_shift_right`. I wonder when it needs to be used, and for which tasks? I sometimes see T5 fine-tuning without it and am not sure when it is suitable to use. Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8244/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8244/timeline
completed
null
null
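As the comment on #8244 above explains, `_shift_right` builds `decoder_input_ids` automatically when only `labels` are supplied. A minimal sketch mirroring what modeling_t5 does — a standalone reimplementation for illustration, not the library function itself:

```python
import torch

def shift_right(labels: torch.Tensor, decoder_start_token_id: int, pad_token_id: int) -> torch.Tensor:
    # Shift labels one position to the right and prepend the decoder start
    # token; positions marked -100 (ignored by the loss) become pad tokens.
    shifted = labels.new_zeros(labels.shape)
    shifted[:, 1:] = labels[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted
```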
https://api.github.com/repos/huggingface/transformers/issues/8243
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8243/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8243/comments
https://api.github.com/repos/huggingface/transformers/issues/8243/events
https://github.com/huggingface/transformers/pull/8243
734,732,010
MDExOlB1bGxSZXF1ZXN0NTE0MjQ2MzMy
8,243
[EncoderDecoder] fix encoder decoder config model type bug
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
MEMBER
null
# What does this PR do? A small typo in the encoder-decoder config class led to an incorrect config model type name; this PR fixes it. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8243/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8243", "html_url": "https://github.com/huggingface/transformers/pull/8243", "diff_url": "https://github.com/huggingface/transformers/pull/8243.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8243.patch", "merged_at": 1604344354000 }
https://api.github.com/repos/huggingface/transformers/issues/8242
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8242/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8242/comments
https://api.github.com/repos/huggingface/transformers/issues/8242/events
https://github.com/huggingface/transformers/issues/8242
734,701,421
MDU6SXNzdWU3MzQ3MDE0MjE=
8,242
Error converting tensorflow checkpoints
{ "login": "chainesanbuenaventura", "id": 64162284, "node_id": "MDQ6VXNlcjY0MTYyMjg0", "avatar_url": "https://avatars.githubusercontent.com/u/64162284?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chainesanbuenaventura", "html_url": "https://github.com/chainesanbuenaventura", "followers_url": "https://api.github.com/users/chainesanbuenaventura/followers", "following_url": "https://api.github.com/users/chainesanbuenaventura/following{/other_user}", "gists_url": "https://api.github.com/users/chainesanbuenaventura/gists{/gist_id}", "starred_url": "https://api.github.com/users/chainesanbuenaventura/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chainesanbuenaventura/subscriptions", "organizations_url": "https://api.github.com/users/chainesanbuenaventura/orgs", "repos_url": "https://api.github.com/users/chainesanbuenaventura/repos", "events_url": "https://api.github.com/users/chainesanbuenaventura/events{/privacy}", "received_events_url": "https://api.github.com/users/chainesanbuenaventura/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I ran into the same problem", "> I ran into the same problem\r\n\r\nbut I didn't get the error", "Hi @nikhilbyte @chainesanbuenaventura \r\n\r\nAny updates? I also have the same problem while converting TensorFlow model to PyTorch model?\r\n\r\nThanks\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
NONE
null
# ❓ Questions & Help I'm trying to convert a BERT Tensorflow checkpoint to hugging face model ## Details ``` !transformers-cli convert \ --model_type bert \ --tf_checkpoint C:\Users\sacl\Panasonic-AI\POC\pretraining\content\PatentBERT\model.ckpt-181172 \ --config C:\Users\sacl\Panasonic-AI\POC\pretraining\content\PatentBERT\bert_config.json \ --pytorch_dump_output C:\Users\sacl\Panasonic-AI\POC\pretraining\content\PatentBERT\pytorch_model.bin ``` Full traceback: > c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 
_np_qint16 = np.dtype([("qint16", np.int16, 1)]) c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) Converting TensorFlow checkpoint from C:\Users\sacl\Panasonic-AI\POC\pretraining\content\PatentBERT\model.ckpt-181172 Loading TF weight bert/embeddings/LayerNorm/beta with shape [768] Loading TF weight bert/embeddings/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/embeddings/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/embeddings/LayerNorm/gamma with shape [768] Loading TF weight bert/embeddings/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/embeddings/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/embeddings/position_embeddings with shape [512, 768] Loading TF weight bert/embeddings/position_embeddings/adam_m with shape [512, 768] Loading TF weight bert/embeddings/position_embeddings/adam_v with shape [512, 768] Loading TF weight bert/embeddings/token_type_embeddings with shape [2, 768] Loading TF weight bert/embeddings/token_type_embeddings/adam_m with shape [2, 768] Loading TF weight bert/embeddings/token_type_embeddings/adam_v with shape [2, 768] Loading TF weight bert/embeddings/word_embeddings with shape [30522, 768] Loading TF weight bert/embeddings/word_embeddings/adam_m with shape [30522, 768] Loading TF weight bert/embeddings/word_embeddings/adam_v with shape [30522, 768] Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_0/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_0/attention/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_0/attention/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_0/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/output/dense/kernel/adam_m with shape [768, 768] "vocab_size": 30522 } Loading TF weight bert/encoder/layer_0/attention/output/dense/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/self/key/bias with shape [768] Loading TF weight 
bert/encoder/layer_0/attention/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_0/attention/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_0/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_0/attention/self/query/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_0/attention/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_0/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_0/attention/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_0/attention/self/value/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_0/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_0/attention/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_0/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_0/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/layer_0/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/layer_0/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_0/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight bert/encoder/layer_0/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight bert/encoder/layer_0/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_0/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_0/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_0/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_0/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_0/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_0/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_0/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_0/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_0/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_0/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_0/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight 
bert/encoder/layer_1/attention/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_1/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_1/attention/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_1/attention/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_1/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/output/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/output/dense/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_1/attention/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_1/attention/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_1/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_1/attention/self/query/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_1/attention/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_1/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_1/attention/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_1/attention/self/value/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_1/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_1/attention/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_1/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_1/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/layer_1/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/layer_1/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_1/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight bert/encoder/layer_1/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight bert/encoder/layer_1/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_1/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_1/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_1/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_1/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_1/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_1/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_1/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_1/output/dense/bias/adam_v with shape [768] Loading TF weight 
bert/encoder/layer_1/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_1/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_1/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_10/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_10/attention/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_10/attention/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_10/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/output/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/output/dense/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_10/attention/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_10/attention/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_10/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_10/attention/self/query/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_10/attention/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_10/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_10/attention/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_10/attention/self/value/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_10/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_10/attention/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_10/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_10/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/layer_10/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/layer_10/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_10/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight bert/encoder/layer_10/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight 
bert/encoder/layer_10/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_10/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_10/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_10/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_10/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_10/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_10/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_10/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_10/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_10/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_10/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_10/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_11/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_11/attention/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_11/attention/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_11/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/output/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/output/dense/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_11/attention/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_11/attention/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_11/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_11/attention/self/query/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_11/attention/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_11/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_11/attention/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_11/attention/self/value/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_11/attention/self/value/kernel with shape [768, 768] 
Loading TF weight bert/encoder/layer_11/attention/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_11/attention/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_11/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_11/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/layer_11/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/layer_11/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_11/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight bert/encoder/layer_11/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight bert/encoder/layer_11/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_11/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_11/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_11/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_11/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_11/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_11/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_11/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_11/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_11/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_11/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_11/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/encoder/layer_2/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_2/attention/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_2/attention/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_2/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_2/attention/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_2/attention/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_2/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_2/attention/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_2/attention/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_2/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_2/attention/output/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_2/attention/output/dense/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_2/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_2/attention/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_2/attention/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_2/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_2/attention/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_2/attention/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_2/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_2/attention/self/query/bias/adam_m with shape [768] 
Loading TF weight bert/encoder/layer_2/attention/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_2/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_2/attention/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_2/attention/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_2/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_2/attention/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_2/attention/self/value/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_2/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_2/attention/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_2/attention/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_2/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_2/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/layer_2/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/layer_2/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_2/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight bert/encoder/layer_2/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight bert/encoder/layer_2/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_2/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_2/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_2/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_2/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_2/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_2/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_2/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_2/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_2/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_2/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_2/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/encoder/layer_3/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_3/attention/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_3/attention/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_3/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_3/attention/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_3/attention/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_3/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_3/attention/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_3/attention/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_3/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_3/attention/output/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_3/attention/output/dense/kernel/adam_v with shape [768, 768] 
Loading TF weight bert/encoder/layer_3/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_3/attention/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_3/attention/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_3/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_3/attention/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_3/attention/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_3/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_3/attention/self/query/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_3/attention/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_3/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_3/attention/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_3/attention/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_3/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_3/attention/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_3/attention/self/value/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_3/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_3/attention/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_3/attention/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_3/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_3/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/layer_3/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/layer_3/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_3/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight bert/encoder/layer_3/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight bert/encoder/layer_3/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_3/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_3/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_3/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_3/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_3/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_3/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_3/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_3/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_3/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_3/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_3/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/encoder/layer_4/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_4/attention/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_4/attention/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_4/attention/output/LayerNorm/gamma with shape [768] Loading TF weight 
bert/encoder/layer_4/attention/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_4/attention/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_4/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_4/attention/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_4/attention/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_4/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_4/attention/output/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_4/attention/output/dense/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_4/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_4/attention/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_4/attention/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_4/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_4/attention/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_4/attention/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_4/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_4/attention/self/query/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_4/attention/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_4/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_4/attention/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_4/attention/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_4/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_4/attention/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_4/attention/self/value/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_4/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_4/attention/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_4/attention/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_4/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_4/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/layer_4/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/layer_4/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_4/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight bert/encoder/layer_4/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight bert/encoder/layer_4/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_4/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_4/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_4/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_4/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_4/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_4/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_4/output/dense/bias/adam_m with shape [768] Loading TF weight 
bert/encoder/layer_4/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_4/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_4/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_4/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/encoder/layer_5/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_5/attention/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_5/attention/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_5/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_5/attention/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_5/attention/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_5/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_5/attention/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_5/attention/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_5/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_5/attention/output/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_5/attention/output/dense/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_5/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_5/attention/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_5/attention/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_5/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_5/attention/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_5/attention/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_5/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_5/attention/self/query/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_5/attention/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_5/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_5/attention/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_5/attention/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_5/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_5/attention/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_5/attention/self/value/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_5/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_5/attention/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_5/attention/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_5/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_5/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/layer_5/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/layer_5/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_5/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight 
bert/encoder/layer_5/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight bert/encoder/layer_5/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_5/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_5/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_5/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_5/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_5/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_5/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_5/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_5/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_5/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_5/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_5/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/encoder/layer_6/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_6/attention/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_6/attention/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_6/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_6/attention/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_6/attention/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_6/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_6/attention/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_6/attention/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_6/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_6/attention/output/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_6/attention/output/dense/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_6/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_6/attention/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_6/attention/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_6/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_6/attention/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_6/attention/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_6/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_6/attention/self/query/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_6/attention/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_6/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_6/attention/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_6/attention/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_6/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_6/attention/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_6/attention/self/value/bias/adam_v with shape [768] Loading TF weight 
bert/encoder/layer_6/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_6/attention/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_6/attention/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_6/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_6/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/layer_6/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/layer_6/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_6/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight bert/encoder/layer_6/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight bert/encoder/layer_6/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_6/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_6/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_6/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_6/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_6/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_6/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_6/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_6/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_6/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_6/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_6/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/encoder/layer_7/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_7/attention/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_7/attention/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_7/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_7/attention/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_7/attention/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_7/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_7/attention/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_7/attention/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_7/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_7/attention/output/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_7/attention/output/dense/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_7/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_7/attention/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_7/attention/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_7/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_7/attention/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_7/attention/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_7/attention/self/query/bias with shape [768] Loading TF weight 
bert/encoder/layer_7/attention/self/query/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_7/attention/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_7/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_7/attention/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_7/attention/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_7/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_7/attention/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_7/attention/self/value/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_7/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_7/attention/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_7/attention/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_7/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_7/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/layer_7/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/layer_7/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_7/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight bert/encoder/layer_7/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight bert/encoder/layer_7/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_7/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_7/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_7/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_7/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_7/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_7/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_7/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_7/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_7/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_7/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_7/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/encoder/layer_8/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_8/attention/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_8/attention/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_8/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_8/attention/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_8/attention/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_8/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_8/attention/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_8/attention/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_8/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_8/attention/output/dense/kernel/adam_m with shape [768, 768] Loading TF weight 
bert/encoder/layer_8/attention/output/dense/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_8/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_8/attention/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_8/attention/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_8/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_8/attention/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_8/attention/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_8/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_8/attention/self/query/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_8/attention/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_8/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_8/attention/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_8/attention/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_8/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_8/attention/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_8/attention/self/value/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_8/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_8/attention/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_8/attention/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_8/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_8/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/layer_8/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/layer_8/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_8/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight bert/encoder/layer_8/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight bert/encoder/layer_8/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_8/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_8/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_8/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_8/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_8/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_8/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_8/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_8/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_8/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_8/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_8/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/encoder/layer_9/attention/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_9/attention/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_9/attention/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight 
bert/encoder/layer_9/attention/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_9/attention/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_9/attention/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_9/attention/output/dense/bias with shape [768] Loading TF weight bert/encoder/layer_9/attention/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_9/attention/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_9/attention/output/dense/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_9/attention/output/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_9/attention/output/dense/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_9/attention/self/key/bias with shape [768] Loading TF weight bert/encoder/layer_9/attention/self/key/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_9/attention/self/key/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_9/attention/self/key/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_9/attention/self/key/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_9/attention/self/key/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_9/attention/self/query/bias with shape [768] Loading TF weight bert/encoder/layer_9/attention/self/query/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_9/attention/self/query/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_9/attention/self/query/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_9/attention/self/query/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_9/attention/self/query/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_9/attention/self/value/bias with shape [768] Loading TF weight bert/encoder/layer_9/attention/self/value/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_9/attention/self/value/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_9/attention/self/value/kernel with shape [768, 768] Loading TF weight bert/encoder/layer_9/attention/self/value/kernel/adam_m with shape [768, 768] Loading TF weight bert/encoder/layer_9/attention/self/value/kernel/adam_v with shape [768, 768] Loading TF weight bert/encoder/layer_9/intermediate/dense/bias with shape [3072] Loading TF weight bert/encoder/layer_9/intermediate/dense/bias/adam_m with shape [3072] Loading TF weight bert/encoder/layer_9/intermediate/dense/bias/adam_v with shape [3072] Loading TF weight bert/encoder/layer_9/intermediate/dense/kernel with shape [768, 3072] Loading TF weight bert/encoder/layer_9/intermediate/dense/kernel/adam_m with shape [768, 3072] Loading TF weight bert/encoder/layer_9/intermediate/dense/kernel/adam_v with shape [768, 3072] Loading TF weight bert/encoder/layer_9/output/LayerNorm/beta with shape [768] Loading TF weight bert/encoder/layer_9/output/LayerNorm/beta/adam_m with shape [768] Loading TF weight bert/encoder/layer_9/output/LayerNorm/beta/adam_v with shape [768] Loading TF weight bert/encoder/layer_9/output/LayerNorm/gamma with shape [768] Loading TF weight bert/encoder/layer_9/output/LayerNorm/gamma/adam_m with shape [768] Loading TF weight bert/encoder/layer_9/output/LayerNorm/gamma/adam_v with shape [768] Loading TF weight bert/encoder/layer_9/output/dense/bias with shape [768] Loading TF weight 
bert/encoder/layer_9/output/dense/bias/adam_m with shape [768] Loading TF weight bert/encoder/layer_9/output/dense/bias/adam_v with shape [768] Loading TF weight bert/encoder/layer_9/output/dense/kernel with shape [3072, 768] Loading TF weight bert/encoder/layer_9/output/dense/kernel/adam_m with shape [3072, 768] Loading TF weight bert/encoder/layer_9/output/dense/kernel/adam_v with shape [3072, 768] Loading TF weight bert/pooler/dense/bias with shape [768] Loading TF weight bert/pooler/dense/bias/adam_m with shape [768] Loading TF weight bert/pooler/dense/bias/adam_v with shape [768] Loading TF weight bert/pooler/dense/kernel with shape [768, 768] Loading TF weight bert/pooler/dense/kernel/adam_m with shape [768, 768] Loading TF weight bert/pooler/dense/kernel/adam_v with shape [768, 768] Loading TF weight global_step with shape [] Loading TF weight output_bias with shape [656] Loading TF weight output_bias/adam_m with shape [656] Loading TF weight output_bias/adam_v with shape [656] Loading TF weight output_weights with shape [656, 768] Loading TF weight output_weights/adam_m with shape [656, 768] Loading TF weight output_weights/adam_v with shape [656, 768] Initialize PyTorch weight ['bert', 'embeddings', 'LayerNorm', 'beta'] Skipping bert/embeddings/LayerNorm/beta/adam_m Skipping bert/embeddings/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'embeddings', 'LayerNorm', 'gamma'] Skipping bert/embeddings/LayerNorm/gamma/adam_m Skipping bert/embeddings/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'embeddings', 'position_embeddings'] Skipping bert/embeddings/position_embeddings/adam_m Skipping bert/embeddings/position_embeddings/adam_v Initialize PyTorch weight ['bert', 'embeddings', 'token_type_embeddings'] Skipping bert/embeddings/token_type_embeddings/adam_m Skipping bert/embeddings/token_type_embeddings/adam_v Initialize PyTorch weight ['bert', 'embeddings', 'word_embeddings'] Skipping bert/embeddings/word_embeddings/adam_m Skipping bert/embeddings/word_embeddings/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_0/attention/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_0/attention/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_0/attention/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_0/attention/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_0/attention/output/dense/bias/adam_m Skipping bert/encoder/layer_0/attention/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_0/attention/output/dense/kernel/adam_m Skipping bert/encoder/layer_0/attention/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias'] Skipping bert/encoder/layer_0/attention/self/key/bias/adam_m Skipping bert/encoder/layer_0/attention/self/key/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel'] Skipping bert/encoder/layer_0/attention/self/key/kernel/adam_m Skipping bert/encoder/layer_0/attention/self/key/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'attention', 'self', 'query', 'bias'] Skipping 
bert/encoder/layer_0/attention/self/query/bias/adam_m Skipping bert/encoder/layer_0/attention/self/query/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'attention', 'self', 'query', 'kernel'] Skipping bert/encoder/layer_0/attention/self/query/kernel/adam_m Skipping bert/encoder/layer_0/attention/self/query/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'attention', 'self', 'value', 'bias'] Skipping bert/encoder/layer_0/attention/self/value/bias/adam_m Skipping bert/encoder/layer_0/attention/self/value/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'attention', 'self', 'value', 'kernel'] Skipping bert/encoder/layer_0/attention/self/value/kernel/adam_m Skipping bert/encoder/layer_0/attention/self/value/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'intermediate', 'dense', 'bias'] Skipping bert/encoder/layer_0/intermediate/dense/bias/adam_m Skipping bert/encoder/layer_0/intermediate/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'intermediate', 'dense', 'kernel'] Skipping bert/encoder/layer_0/intermediate/dense/kernel/adam_m Skipping bert/encoder/layer_0/intermediate/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_0/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_0/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_0/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_0/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_0/output/dense/bias/adam_m Skipping bert/encoder/layer_0/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_0', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_0/output/dense/kernel/adam_m Skipping bert/encoder/layer_0/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_1/attention/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_1/attention/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_1/attention/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_1/attention/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_1/attention/output/dense/bias/adam_m Skipping bert/encoder/layer_1/attention/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_1/attention/output/dense/kernel/adam_m Skipping bert/encoder/layer_1/attention/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'attention', 'self', 'key', 'bias'] Skipping bert/encoder/layer_1/attention/self/key/bias/adam_m Skipping bert/encoder/layer_1/attention/self/key/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'attention', 'self', 'key', 'kernel'] Skipping bert/encoder/layer_1/attention/self/key/kernel/adam_m Skipping bert/encoder/layer_1/attention/self/key/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'attention', 'self', 'query', 'bias'] Skipping 
bert/encoder/layer_1/attention/self/query/bias/adam_m Skipping bert/encoder/layer_1/attention/self/query/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'attention', 'self', 'query', 'kernel'] Skipping bert/encoder/layer_1/attention/self/query/kernel/adam_m Skipping bert/encoder/layer_1/attention/self/query/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'attention', 'self', 'value', 'bias'] Skipping bert/encoder/layer_1/attention/self/value/bias/adam_m Skipping bert/encoder/layer_1/attention/self/value/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'attention', 'self', 'value', 'kernel'] Skipping bert/encoder/layer_1/attention/self/value/kernel/adam_m Skipping bert/encoder/layer_1/attention/self/value/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'intermediate', 'dense', 'bias'] Skipping bert/encoder/layer_1/intermediate/dense/bias/adam_m Skipping bert/encoder/layer_1/intermediate/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'intermediate', 'dense', 'kernel'] Skipping bert/encoder/layer_1/intermediate/dense/kernel/adam_m Skipping bert/encoder/layer_1/intermediate/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_1/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_1/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_1/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_1/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_1/output/dense/bias/adam_m Skipping bert/encoder/layer_1/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_1', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_1/output/dense/kernel/adam_m Skipping bert/encoder/layer_1/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_10/attention/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_10/attention/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_10/attention/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_10/attention/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_10/attention/output/dense/bias/adam_m Skipping bert/encoder/layer_10/attention/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_10/attention/output/dense/kernel/adam_m Skipping bert/encoder/layer_10/attention/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'attention', 'self', 'key', 'bias'] Skipping bert/encoder/layer_10/attention/self/key/bias/adam_m Skipping bert/encoder/layer_10/attention/self/key/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'attention', 'self', 'key', 'kernel'] Skipping bert/encoder/layer_10/attention/self/key/kernel/adam_m Skipping bert/encoder/layer_10/attention/self/key/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'attention', 'self', 'query', 'bias'] Skipping 
bert/encoder/layer_10/attention/self/query/bias/adam_m Skipping bert/encoder/layer_10/attention/self/query/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'attention', 'self', 'query', 'kernel'] Skipping bert/encoder/layer_10/attention/self/query/kernel/adam_m Skipping bert/encoder/layer_10/attention/self/query/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'attention', 'self', 'value', 'bias'] Skipping bert/encoder/layer_10/attention/self/value/bias/adam_m Skipping bert/encoder/layer_10/attention/self/value/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'attention', 'self', 'value', 'kernel'] Skipping bert/encoder/layer_10/attention/self/value/kernel/adam_m Skipping bert/encoder/layer_10/attention/self/value/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'intermediate', 'dense', 'bias'] Skipping bert/encoder/layer_10/intermediate/dense/bias/adam_m Skipping bert/encoder/layer_10/intermediate/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'intermediate', 'dense', 'kernel'] Skipping bert/encoder/layer_10/intermediate/dense/kernel/adam_m Skipping bert/encoder/layer_10/intermediate/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_10/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_10/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_10/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_10/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_10/output/dense/bias/adam_m Skipping bert/encoder/layer_10/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_10', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_10/output/dense/kernel/adam_m Skipping bert/encoder/layer_10/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_11/attention/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_11/attention/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_11/attention/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_11/attention/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_11/attention/output/dense/bias/adam_m Skipping bert/encoder/layer_11/attention/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_11/attention/output/dense/kernel/adam_m Skipping bert/encoder/layer_11/attention/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'attention', 'self', 'key', 'bias'] Skipping bert/encoder/layer_11/attention/self/key/bias/adam_m Skipping bert/encoder/layer_11/attention/self/key/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'attention', 'self', 'key', 'kernel'] Skipping bert/encoder/layer_11/attention/self/key/kernel/adam_m Skipping bert/encoder/layer_11/attention/self/key/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'attention', 'self', 'query', 'bias'] 
Skipping bert/encoder/layer_11/attention/self/query/bias/adam_m Skipping bert/encoder/layer_11/attention/self/query/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'attention', 'self', 'query', 'kernel'] Skipping bert/encoder/layer_11/attention/self/query/kernel/adam_m Skipping bert/encoder/layer_11/attention/self/query/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'attention', 'self', 'value', 'bias'] Skipping bert/encoder/layer_11/attention/self/value/bias/adam_m Skipping bert/encoder/layer_11/attention/self/value/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'attention', 'self', 'value', 'kernel'] Skipping bert/encoder/layer_11/attention/self/value/kernel/adam_m Skipping bert/encoder/layer_11/attention/self/value/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'intermediate', 'dense', 'bias'] Skipping bert/encoder/layer_11/intermediate/dense/bias/adam_m Skipping bert/encoder/layer_11/intermediate/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'intermediate', 'dense', 'kernel'] Skipping bert/encoder/layer_11/intermediate/dense/kernel/adam_m Skipping bert/encoder/layer_11/intermediate/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_11/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_11/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_11/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_11/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_11/output/dense/bias/adam_m Skipping bert/encoder/layer_11/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_11', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_11/output/dense/kernel/adam_m Skipping bert/encoder/layer_11/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_2/attention/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_2/attention/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_2/attention/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_2/attention/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_2/attention/output/dense/bias/adam_m Skipping bert/encoder/layer_2/attention/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_2/attention/output/dense/kernel/adam_m Skipping bert/encoder/layer_2/attention/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'attention', 'self', 'key', 'bias'] Skipping bert/encoder/layer_2/attention/self/key/bias/adam_m Skipping bert/encoder/layer_2/attention/self/key/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'attention', 'self', 'key', 'kernel'] Skipping bert/encoder/layer_2/attention/self/key/kernel/adam_m Skipping bert/encoder/layer_2/attention/self/key/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'attention', 'self', 'query', 'bias'] Skipping 
bert/encoder/layer_2/attention/self/query/bias/adam_m Skipping bert/encoder/layer_2/attention/self/query/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'attention', 'self', 'query', 'kernel'] Skipping bert/encoder/layer_2/attention/self/query/kernel/adam_m Skipping bert/encoder/layer_2/attention/self/query/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'attention', 'self', 'value', 'bias'] Skipping bert/encoder/layer_2/attention/self/value/bias/adam_m Skipping bert/encoder/layer_2/attention/self/value/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'attention', 'self', 'value', 'kernel'] Skipping bert/encoder/layer_2/attention/self/value/kernel/adam_m Skipping bert/encoder/layer_2/attention/self/value/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'intermediate', 'dense', 'bias'] Skipping bert/encoder/layer_2/intermediate/dense/bias/adam_m Skipping bert/encoder/layer_2/intermediate/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'intermediate', 'dense', 'kernel'] Skipping bert/encoder/layer_2/intermediate/dense/kernel/adam_m Skipping bert/encoder/layer_2/intermediate/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_2/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_2/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_2/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_2/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_2/output/dense/bias/adam_m Skipping bert/encoder/layer_2/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_2', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_2/output/dense/kernel/adam_m Skipping bert/encoder/layer_2/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_3/attention/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_3/attention/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_3/attention/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_3/attention/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_3/attention/output/dense/bias/adam_m Skipping bert/encoder/layer_3/attention/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_3/attention/output/dense/kernel/adam_m Skipping bert/encoder/layer_3/attention/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'attention', 'self', 'key', 'bias'] Skipping bert/encoder/layer_3/attention/self/key/bias/adam_m Skipping bert/encoder/layer_3/attention/self/key/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'attention', 'self', 'key', 'kernel'] Skipping bert/encoder/layer_3/attention/self/key/kernel/adam_m Skipping bert/encoder/layer_3/attention/self/key/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'attention', 'self', 'query', 'bias'] Skipping 
bert/encoder/layer_3/attention/self/query/bias/adam_m Skipping bert/encoder/layer_3/attention/self/query/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'attention', 'self', 'query', 'kernel'] Skipping bert/encoder/layer_3/attention/self/query/kernel/adam_m Skipping bert/encoder/layer_3/attention/self/query/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'attention', 'self', 'value', 'bias'] Skipping bert/encoder/layer_3/attention/self/value/bias/adam_m Skipping bert/encoder/layer_3/attention/self/value/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'attention', 'self', 'value', 'kernel'] Skipping bert/encoder/layer_3/attention/self/value/kernel/adam_m Skipping bert/encoder/layer_3/attention/self/value/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'intermediate', 'dense', 'bias'] Skipping bert/encoder/layer_3/intermediate/dense/bias/adam_m Skipping bert/encoder/layer_3/intermediate/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'intermediate', 'dense', 'kernel'] Skipping bert/encoder/layer_3/intermediate/dense/kernel/adam_m Skipping bert/encoder/layer_3/intermediate/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_3/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_3/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_3/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_3/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_3/output/dense/bias/adam_m Skipping bert/encoder/layer_3/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_3', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_3/output/dense/kernel/adam_m Skipping bert/encoder/layer_3/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_4/attention/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_4/attention/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_4/attention/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_4/attention/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'attention', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_4/attention/output/dense/bias/adam_m Skipping bert/encoder/layer_4/attention/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'attention', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_4/attention/output/dense/kernel/adam_m Skipping bert/encoder/layer_4/attention/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'attention', 'self', 'key', 'bias'] Skipping bert/encoder/layer_4/attention/self/key/bias/adam_m Skipping bert/encoder/layer_4/attention/self/key/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'attention', 'self', 'key', 'kernel'] Skipping bert/encoder/layer_4/attention/self/key/kernel/adam_m Skipping bert/encoder/layer_4/attention/self/key/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'attention', 'self', 'query', 'bias'] Skipping 
bert/encoder/layer_4/attention/self/query/bias/adam_m Skipping bert/encoder/layer_4/attention/self/query/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'attention', 'self', 'query', 'kernel'] Skipping bert/encoder/layer_4/attention/self/query/kernel/adam_m Skipping bert/encoder/layer_4/attention/self/query/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'attention', 'self', 'value', 'bias'] Skipping bert/encoder/layer_4/attention/self/value/bias/adam_m Skipping bert/encoder/layer_4/attention/self/value/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'attention', 'self', 'value', 'kernel'] Skipping bert/encoder/layer_4/attention/self/value/kernel/adam_m Skipping bert/encoder/layer_4/attention/self/value/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'intermediate', 'dense', 'bias'] Skipping bert/encoder/layer_4/intermediate/dense/bias/adam_m Skipping bert/encoder/layer_4/intermediate/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'intermediate', 'dense', 'kernel'] Skipping bert/encoder/layer_4/intermediate/dense/kernel/adam_m Skipping bert/encoder/layer_4/intermediate/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_4/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_4/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_4/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_4/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_4/output/dense/bias/adam_m Skipping bert/encoder/layer_4/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_4', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_4/output/dense/kernel/adam_m Skipping bert/encoder/layer_4/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_5/attention/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_5/attention/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_5/attention/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_5/attention/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_5/attention/output/dense/bias/adam_m Skipping bert/encoder/layer_5/attention/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_5/attention/output/dense/kernel/adam_m Skipping bert/encoder/layer_5/attention/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'attention', 'self', 'key', 'bias'] Skipping bert/encoder/layer_5/attention/self/key/bias/adam_m Skipping bert/encoder/layer_5/attention/self/key/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'attention', 'self', 'key', 'kernel'] Skipping bert/encoder/layer_5/attention/self/key/kernel/adam_m Skipping bert/encoder/layer_5/attention/self/key/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'attention', 'self', 'query', 'bias'] Skipping 
bert/encoder/layer_5/attention/self/query/bias/adam_m Skipping bert/encoder/layer_5/attention/self/query/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'attention', 'self', 'query', 'kernel'] Skipping bert/encoder/layer_5/attention/self/query/kernel/adam_m Skipping bert/encoder/layer_5/attention/self/query/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'attention', 'self', 'value', 'bias'] Skipping bert/encoder/layer_5/attention/self/value/bias/adam_m Skipping bert/encoder/layer_5/attention/self/value/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'attention', 'self', 'value', 'kernel'] Skipping bert/encoder/layer_5/attention/self/value/kernel/adam_m Skipping bert/encoder/layer_5/attention/self/value/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'intermediate', 'dense', 'bias'] Skipping bert/encoder/layer_5/intermediate/dense/bias/adam_m Skipping bert/encoder/layer_5/intermediate/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'intermediate', 'dense', 'kernel'] Skipping bert/encoder/layer_5/intermediate/dense/kernel/adam_m Skipping bert/encoder/layer_5/intermediate/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_5/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_5/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_5/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_5/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_5/output/dense/bias/adam_m Skipping bert/encoder/layer_5/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_5', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_5/output/dense/kernel/adam_m Skipping bert/encoder/layer_5/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_6/attention/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_6/attention/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_6/attention/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_6/attention/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_6/attention/output/dense/bias/adam_m Skipping bert/encoder/layer_6/attention/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_6/attention/output/dense/kernel/adam_m Skipping bert/encoder/layer_6/attention/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'attention', 'self', 'key', 'bias'] Skipping bert/encoder/layer_6/attention/self/key/bias/adam_m Skipping bert/encoder/layer_6/attention/self/key/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'attention', 'self', 'key', 'kernel'] Skipping bert/encoder/layer_6/attention/self/key/kernel/adam_m Skipping bert/encoder/layer_6/attention/self/key/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'attention', 'self', 'query', 'bias'] Skipping 
bert/encoder/layer_6/attention/self/query/bias/adam_m Skipping bert/encoder/layer_6/attention/self/query/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'attention', 'self', 'query', 'kernel'] Skipping bert/encoder/layer_6/attention/self/query/kernel/adam_m Skipping bert/encoder/layer_6/attention/self/query/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'attention', 'self', 'value', 'bias'] Skipping bert/encoder/layer_6/attention/self/value/bias/adam_m Skipping bert/encoder/layer_6/attention/self/value/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'attention', 'self', 'value', 'kernel'] Skipping bert/encoder/layer_6/attention/self/value/kernel/adam_m Skipping bert/encoder/layer_6/attention/self/value/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'intermediate', 'dense', 'bias'] Skipping bert/encoder/layer_6/intermediate/dense/bias/adam_m Skipping bert/encoder/layer_6/intermediate/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'intermediate', 'dense', 'kernel'] Skipping bert/encoder/layer_6/intermediate/dense/kernel/adam_m Skipping bert/encoder/layer_6/intermediate/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_6/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_6/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_6/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_6/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_6/output/dense/bias/adam_m Skipping bert/encoder/layer_6/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_6', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_6/output/dense/kernel/adam_m Skipping bert/encoder/layer_6/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_7/attention/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_7/attention/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_7/attention/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_7/attention/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_7/attention/output/dense/bias/adam_m Skipping bert/encoder/layer_7/attention/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_7/attention/output/dense/kernel/adam_m Skipping bert/encoder/layer_7/attention/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'attention', 'self', 'key', 'bias'] Skipping bert/encoder/layer_7/attention/self/key/bias/adam_m Skipping bert/encoder/layer_7/attention/self/key/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'attention', 'self', 'key', 'kernel'] Skipping bert/encoder/layer_7/attention/self/key/kernel/adam_m Skipping bert/encoder/layer_7/attention/self/key/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'attention', 'self', 'query', 'bias'] Skipping 
bert/encoder/layer_7/attention/self/query/bias/adam_m Skipping bert/encoder/layer_7/attention/self/query/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'attention', 'self', 'query', 'kernel'] Skipping bert/encoder/layer_7/attention/self/query/kernel/adam_m Skipping bert/encoder/layer_7/attention/self/query/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'attention', 'self', 'value', 'bias'] Skipping bert/encoder/layer_7/attention/self/value/bias/adam_m Skipping bert/encoder/layer_7/attention/self/value/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'attention', 'self', 'value', 'kernel'] Skipping bert/encoder/layer_7/attention/self/value/kernel/adam_m Skipping bert/encoder/layer_7/attention/self/value/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'intermediate', 'dense', 'bias'] Skipping bert/encoder/layer_7/intermediate/dense/bias/adam_m Skipping bert/encoder/layer_7/intermediate/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'intermediate', 'dense', 'kernel'] Skipping bert/encoder/layer_7/intermediate/dense/kernel/adam_m Skipping bert/encoder/layer_7/intermediate/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_7/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_7/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_7/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_7/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_7/output/dense/bias/adam_m Skipping bert/encoder/layer_7/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_7', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_7/output/dense/kernel/adam_m Skipping bert/encoder/layer_7/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_8/attention/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_8/attention/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_8/attention/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_8/attention/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_8/attention/output/dense/bias/adam_m Skipping bert/encoder/layer_8/attention/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_8/attention/output/dense/kernel/adam_m Skipping bert/encoder/layer_8/attention/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'attention', 'self', 'key', 'bias'] Skipping bert/encoder/layer_8/attention/self/key/bias/adam_m Skipping bert/encoder/layer_8/attention/self/key/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'attention', 'self', 'key', 'kernel'] Skipping bert/encoder/layer_8/attention/self/key/kernel/adam_m Skipping bert/encoder/layer_8/attention/self/key/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'attention', 'self', 'query', 'bias'] Skipping 
bert/encoder/layer_8/attention/self/query/bias/adam_m Skipping bert/encoder/layer_8/attention/self/query/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'attention', 'self', 'query', 'kernel'] Skipping bert/encoder/layer_8/attention/self/query/kernel/adam_m Skipping bert/encoder/layer_8/attention/self/query/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'attention', 'self', 'value', 'bias'] Skipping bert/encoder/layer_8/attention/self/value/bias/adam_m Skipping bert/encoder/layer_8/attention/self/value/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'attention', 'self', 'value', 'kernel'] Skipping bert/encoder/layer_8/attention/self/value/kernel/adam_m Skipping bert/encoder/layer_8/attention/self/value/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'intermediate', 'dense', 'bias'] Skipping bert/encoder/layer_8/intermediate/dense/bias/adam_m Skipping bert/encoder/layer_8/intermediate/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'intermediate', 'dense', 'kernel'] Skipping bert/encoder/layer_8/intermediate/dense/kernel/adam_m Skipping bert/encoder/layer_8/intermediate/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_8/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_8/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_8/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_8/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_8/output/dense/bias/adam_m Skipping bert/encoder/layer_8/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_8', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_8/output/dense/kernel/adam_m Skipping bert/encoder/layer_8/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_9/attention/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_9/attention/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_9/attention/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_9/attention/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_9/attention/output/dense/bias/adam_m Skipping bert/encoder/layer_9/attention/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_9/attention/output/dense/kernel/adam_m Skipping bert/encoder/layer_9/attention/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'attention', 'self', 'key', 'bias'] Skipping bert/encoder/layer_9/attention/self/key/bias/adam_m Skipping bert/encoder/layer_9/attention/self/key/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'attention', 'self', 'key', 'kernel'] Skipping bert/encoder/layer_9/attention/self/key/kernel/adam_m Skipping bert/encoder/layer_9/attention/self/key/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'attention', 'self', 'query', 'bias'] Skipping 
bert/encoder/layer_9/attention/self/query/bias/adam_m Skipping bert/encoder/layer_9/attention/self/query/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'attention', 'self', 'query', 'kernel'] Skipping bert/encoder/layer_9/attention/self/query/kernel/adam_m Skipping bert/encoder/layer_9/attention/self/query/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'attention', 'self', 'value', 'bias'] Skipping bert/encoder/layer_9/attention/self/value/bias/adam_m Skipping bert/encoder/layer_9/attention/self/value/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'attention', 'self', 'value', 'kernel'] Skipping bert/encoder/layer_9/attention/self/value/kernel/adam_m Skipping bert/encoder/layer_9/attention/self/value/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'intermediate', 'dense', 'bias'] Skipping bert/encoder/layer_9/intermediate/dense/bias/adam_m Skipping bert/encoder/layer_9/intermediate/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'intermediate', 'dense', 'kernel'] Skipping bert/encoder/layer_9/intermediate/dense/kernel/adam_m Skipping bert/encoder/layer_9/intermediate/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'output', 'LayerNorm', 'beta'] Skipping bert/encoder/layer_9/output/LayerNorm/beta/adam_m Skipping bert/encoder/layer_9/output/LayerNorm/beta/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'output', 'LayerNorm', 'gamma'] Skipping bert/encoder/layer_9/output/LayerNorm/gamma/adam_m Skipping bert/encoder/layer_9/output/LayerNorm/gamma/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'output', 'dense', 'bias'] Skipping bert/encoder/layer_9/output/dense/bias/adam_m Skipping bert/encoder/layer_9/output/dense/bias/adam_v Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'output', 'dense', 'kernel'] Skipping bert/encoder/layer_9/output/dense/kernel/adam_m Skipping bert/encoder/layer_9/output/dense/kernel/adam_v Initialize PyTorch weight ['bert', 'pooler', 'dense', 'bias'] Skipping bert/pooler/dense/bias/adam_m Skipping bert/pooler/dense/bias/adam_v Initialize PyTorch weight ['bert', 'pooler', 'dense', 'kernel'] Skipping bert/pooler/dense/kernel/adam_m Skipping bert/pooler/dense/kernel/adam_v Skipping global_step
Traceback (most recent call last):
  File "c:\programdata\anaconda3\envs\py37cuda10\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\programdata\anaconda3\envs\py37cuda10\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\ProgramData\Anaconda3\envs\py37cuda10\Scripts\transformers-cli.exe\__main__.py", line 7, in <module>
  File "c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\transformers\commands\transformers_cli.py", line 33, in main
    service.run()
  File "c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\transformers\commands\convert.py", line 91, in run
    convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)
  File "c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\transformers\convert_bert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
    load_tf_weights_in_bert(model, config, tf_checkpoint_path)
  File "c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\transformers\modeling_bert.py", line 135, in load_tf_weights_in_bert
    pointer = getattr(pointer, "bias")
  File "c:\programdata\anaconda3\envs\py37cuda10\lib\site-packages\torch\nn\modules\module.py", line 779, in __getattr__
    type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'BertForPreTraining' object has no attribute 'bias'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8242/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8241
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8241/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8241/comments
https://api.github.com/repos/huggingface/transformers/issues/8241/events
https://github.com/huggingface/transformers/pull/8241
734,697,397
MDExOlB1bGxSZXF1ZXN0NTE0MjE4MDg0
8,241
Update model cards of deepset/roberta-base-squad2 v1 and v2
{ "login": "brandenchan", "id": 33759007, "node_id": "MDQ6VXNlcjMzNzU5MDA3", "avatar_url": "https://avatars.githubusercontent.com/u/33759007?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brandenchan", "html_url": "https://github.com/brandenchan", "followers_url": "https://api.github.com/users/brandenchan/followers", "following_url": "https://api.github.com/users/brandenchan/following{/other_user}", "gists_url": "https://api.github.com/users/brandenchan/gists{/gist_id}", "starred_url": "https://api.github.com/users/brandenchan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brandenchan/subscriptions", "organizations_url": "https://api.github.com/users/brandenchan/orgs", "repos_url": "https://api.github.com/users/brandenchan/repos", "events_url": "https://api.github.com/users/brandenchan/events{/privacy}", "received_events_url": "https://api.github.com/users/brandenchan/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
Update model cards since deepset/roberta-base-squad2 is now superseded by deepset/roberta-base-squad2-v2
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8241/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8241/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8241", "html_url": "https://github.com/huggingface/transformers/pull/8241", "diff_url": "https://github.com/huggingface/transformers/pull/8241.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8241.patch", "merged_at": 1604506886000 }
https://api.github.com/repos/huggingface/transformers/issues/8240
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8240/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8240/comments
https://api.github.com/repos/huggingface/transformers/issues/8240/events
https://github.com/huggingface/transformers/pull/8240
734,665,126
MDExOlB1bGxSZXF1ZXN0NTE0MTkxNTMz
8,240
Add line by line option to mlm/plm scripts
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, I am currently executing the **run_mlm.py** file, because I do not know the entire code structure very well, and I am a little confused about the 262 lines of code in the **run_mlm.py** file. It is line 262 in the figure below. \r\n![2020-11-03 19-21-40屏幕截图](https://user-images.githubusercontent.com/39059793/97980374-a1070f00-1e0b-11eb-9ba7-11041d1049cc.png)\r\n\r\nI think whether False in **padding = \"max_length\" if data_args.pad_to_max_length else False** should be True. Still say that True or False has no effect on the result. Thank you. Please ignore if I understand it wrong. :)\r\n\r\nIn addition, the following figure shows the usage of **tokenizer** in the Transformers documentation.\r\n![2020-11-03 19-37-35屏幕截图](https://user-images.githubusercontent.com/39059793/97980647-15da4900-1e0c-11eb-8c86-891d37dcf901.png)\r\n\r\n", "Hi there, the test is as intended: the behavior is the following:\r\n- if `data_args.pad_to_max_length` is True, then we will pad to the maximum length of the model.\r\n- otherwise we don't pas (yet). Padding will done by the data collator so that we pad to the maximum length in the batch (dynamic padding).", "I got it, thanks!", "Hello, I still have a question to consult you. I want to train the **Translation Language Modeling (TLM)** in **XLM** (Paper: Cross-lingual Language Model Pretraining). The translation language modeling (**TLM**) is very similar to the **Masked Language Modeling (MLM)**, which only shows the difference in the form of input data. If I want to use the **run_mlm.py** file to achieve the effect of training the translation language modeling (**TLM**), can I just modify the composition of training data without modifying the source code of the **run_mlm.py** file? Is this feasible?\r\n\r\nFor example, for the masked language modeling (**MLM**), one row of my training data is a language, as shown below:\r\n\r\n( **Row 1** ) polonium 's isotopes tend to decay with alpha or beta decay ( **en** ) .\r\n( **Row 2** ) 231 and penetrated the armour of the Panzer IV behind it ( **en** ) .\r\n( **Row 3** ) die Isotope von Polonium neigen dazu , mit dem Alpha- oder Beta-Zerfall zu zerfallen ( **de** ) .\r\n( **Row 4** ) 231 und durchbrach die Rüstung des Panzers IV hinter ihm ( **de** ) .\r\n**...**\r\n\r\nFor the translation language modeling (**TLM**), my training data is a combination of two parallel corpora (It is to splice the above data in pairs. The separator is **[/s]**.), as shown below:\r\n\r\n( **Row 1** ) polonium 's isotopes tend to decay with alpha or beta decay ( **en** ) . **[/s]** die Isotope von Polonium neigen dazu , mit dem Alpha- oder Beta-Zerfall zu zerfallen ( **de** ) .\r\n( **Row 2** ) 231 and penetrated the armour of the Panzer IV behind it ( **en** ) . **[/s]** 231 und durchbrach die Rüstung des Panzers IV hinter ihm ( **de** ) .\r\n**...**\r\n\r\n\r\nIf I only modify the training data into a combination of two parallel corpora before executing the **run_mlm.py** file, can I achieve the effect of training the translation language modeling (**TLM**)?\r\n\r\nLooking forward to your answer, thank you very much!", "Hi @i-wanna-to this last question is something you should post on the forum for discussion at https://discuss.huggingface.co " ]
1,604
1,604
1,604
COLLABORATOR
null
# What does this PR do? The old `run_language_modeling` script supported the option to choose `line_by_line` or not for the datasets in MLM/PLM. This PR adds that option to `run_mlm` and `run_plm`. It also updates the README to present those options and adds a flag to disable dynamic batching on TPU: TPUs need all batches to always have the same size to avoid recompiling the code at each training/evaluation step. All scripts were tested on distributed GPU/TPU environments, with and without the new flags, and train to the expected ppl on wikitext-2.
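A condensed sketch of the two dataset modes the PR describes, loosely following the shape of the script's helpers; column names and block handling are simplified for illustration:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_line_by_line(examples, max_length=128):
    # --line_by_line: every non-empty line is its own example. Padding to a
    # fixed max_length also keeps batch shapes static, which is what TPUs
    # need to avoid recompiling at each step.
    lines = [l for l in examples["text"] if l and not l.isspace()]
    return tokenizer(lines, truncation=True, padding="max_length",
                     max_length=max_length)

def group_texts(examples, block_size=128):
    # default mode: concatenate all tokenized text, then slice it into
    # block_size chunks so no token is wasted on padding.
    ids = sum(examples["input_ids"], [])
    total = (len(ids) // block_size) * block_size
    return {"input_ids": [ids[i:i + block_size] for i in range(0, total, block_size)]}
```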
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8240/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8240", "html_url": "https://github.com/huggingface/transformers/pull/8240", "diff_url": "https://github.com/huggingface/transformers/pull/8240.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8240.patch", "merged_at": 1604338024000 }
https://api.github.com/repos/huggingface/transformers/issues/8239
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8239/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8239/comments
https://api.github.com/repos/huggingface/transformers/issues/8239/events
https://github.com/huggingface/transformers/pull/8239
734,605,107
MDExOlB1bGxSZXF1ZXN0NTE0MTQxNTI1
8,239
Fix TensorBoardCallback for older versions of PyTorch
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
COLLABORATOR
null
# What does this PR do? It looks like the old `SummaryWriter` class from `tensorboardX` does not have all the methods of the more recent class in PyTorch, so this PR just checks that the method is there before using it. Fixes #8202
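A minimal sketch of the guard being described, assuming the writer may be either PyTorch's `SummaryWriter` or the older `tensorboardX` one; the helper name is hypothetical:

```python
def log_hparams(tb_writer, hparams: dict) -> None:
    # tensorboardX's older SummaryWriter lacks add_hparams, so probe for the
    # method instead of assuming the newer PyTorch API is available.
    if hasattr(tb_writer, "add_hparams"):
        tb_writer.add_hparams(hparams, metric_dict={})
```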
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8239/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8239", "html_url": "https://github.com/huggingface/transformers/pull/8239", "diff_url": "https://github.com/huggingface/transformers/pull/8239.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8239.patch", "merged_at": 1604331809000 }
https://api.github.com/repos/huggingface/transformers/issues/8238
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8238/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8238/comments
https://api.github.com/repos/huggingface/transformers/issues/8238/events
https://github.com/huggingface/transformers/pull/8238
734,597,284
MDExOlB1bGxSZXF1ZXN0NTE0MTM1MDcw
8,238
Patch reports
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
MEMBER
null
Patches the reports failures introduced by #8007. Removes the examples tests from the multi-gpu tests for now. Tests the pipelines in the TF suite.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8238/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8238", "html_url": "https://github.com/huggingface/transformers/pull/8238", "diff_url": "https://github.com/huggingface/transformers/pull/8238.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8238.patch", "merged_at": 1604330786000 }
https://api.github.com/repos/huggingface/transformers/issues/8237
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8237/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8237/comments
https://api.github.com/repos/huggingface/transformers/issues/8237/events
https://github.com/huggingface/transformers/pull/8237
734,586,377
MDExOlB1bGxSZXF1ZXN0NTE0MTI2MDU5
8,237
Fix bad import with PyTorch <= 1.4.1
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "FYI `SAVE_STATE_WARNING` has been removed 3 days ago: pytorch/pytorch#46813\r\n\r\nSo `transformers` needs to be recoded not to use that constant.\r\n\r\nLooking at its use, this probably would suffice:\r\n\r\n```\r\n--- a/src/transformers/trainer_pt_utils.py\r\n+++ b/src/transformers/trainer_pt_utils.py\r\n@@ -34,7 +34,7 @@ from .utils import logging\r\n if is_torch_tpu_available():\r\n import torch_xla.core.xla_model as xm\r\n\r\n-if version.parse(torch.__version__) <= version.parse(\"1.4.1\"):\r\n+if version.parse(torch.__version__) <= version.parse(\"1.4.1\") or version.parse(torch.__version__) > version.parse(\"1.7.0\"):\r\n SAVE_STATE_WARNING = \"\"\r\n else:\r\n from torch.optim.lr_scheduler import SAVE_STATE_WARNING\r\n```\r\n\r\nand perhaps adding a note why this was needed in first place." ]
1,604
1,607
1,604
COLLABORATOR
null
# What does this PR do? `trainer_pt_utils` imports `SAVE_STATE_WARNING` from PyTorch, which only exists in 1.5.0 or later. This fixes that problem. Fixes #8232
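A minimal sketch of an alternative, version-agnostic guard; the merged fix uses an explicit `version.parse` comparison instead, so treat this as an illustration rather than the actual patch:

```python
# Sketch: tolerate PyTorch versions where SAVE_STATE_WARNING does not exist
# (the constant was added in torch 1.5.0 and later removed upstream again).
try:
    from torch.optim.lr_scheduler import SAVE_STATE_WARNING
except ImportError:
    SAVE_STATE_WARNING = ""
```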
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8237/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8237/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8237", "html_url": "https://github.com/huggingface/transformers/pull/8237", "diff_url": "https://github.com/huggingface/transformers/pull/8237.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8237.patch", "merged_at": 1604330798000 }
https://api.github.com/repos/huggingface/transformers/issues/8236
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8236/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8236/comments
https://api.github.com/repos/huggingface/transformers/issues/8236/events
https://github.com/huggingface/transformers/issues/8236
734,570,663
MDU6SXNzdWU3MzQ1NzA2NjM=
8,236
Weird Behavior in Finetuning Pegasus on a Custom Dataset/Longer Summaries Generated
{ "login": "bharathc346", "id": 53350528, "node_id": "MDQ6VXNlcjUzMzUwNTI4", "avatar_url": "https://avatars.githubusercontent.com/u/53350528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bharathc346", "html_url": "https://github.com/bharathc346", "followers_url": "https://api.github.com/users/bharathc346/followers", "following_url": "https://api.github.com/users/bharathc346/following{/other_user}", "gists_url": "https://api.github.com/users/bharathc346/gists{/gist_id}", "starred_url": "https://api.github.com/users/bharathc346/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bharathc346/subscriptions", "organizations_url": "https://api.github.com/users/bharathc346/orgs", "repos_url": "https://api.github.com/users/bharathc346/repos", "events_url": "https://api.github.com/users/bharathc346/events{/privacy}", "received_events_url": "https://api.github.com/users/bharathc346/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Great Q!\r\nwe added `min_length=32` to many pegasus configs. Set `min_length=0` to fallback to the old behavior.\r\nYou shouldn't need to re-train." ]
1,604
1,604
1,604
NONE
null
## Environment info - `transformers` version: 3.4.0 - Platform: Linux-4.4.0-186-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): NA - Using GPU in script?: Yes - Using distributed or parallel set-up in script?:No ### Who can help @sshleifer ## Information I am using Pegasus The problem arises when using my own modified scripts The tasks I am working on summarization on my own dataset: I am finetuning `google/pegasus-cnn_dailymail` on my own dataset ## Problem I ran finetuning w/ `google/pegasus-cnn_dailymail` on my dataset about two weeks ago w/ similar code and got much better results. I have saved these checkpoints and will refer to them as the "old checkpoints" Now I am running roughly the same script finetuning `google/pegasus-cnn_dailymail` on my dataset and for some reason Pegasus seems to produce a lot of irrelevant tokens (maybe doesn't know when to stop properly). I also saved these checkpoints and will refer to them as the "new checkpoints". **Example** ``` Source: "Yes, please confirm the medication above. What do you think could be causing constipation? I eat well, exercise, drink a lot of water, etc." Predicted Target (old checkpoint): "Thinks could be causing constipation. Eats well, drinks a lot of water, etc." Predicted Target (new checkpoint): "Eats well. Exercised a lot of water above water. Constipation. Medications causing constipation. Is situated in the right-sided abdomen." ``` Both of the predicted targets were generated with the same decoding code so I do not think it is a problem there. Since the new checkpoint does not do as well as old I suspect I am doing something wrong in my training script. Here is how I am doing my training step: ``` def _train_step(self, batch): outputs = self._step(batch) lm_logits = outputs.logits labels = batch["target_input_ids"].to(self.device) loss = F.cross_entropy(lm_logits.view(-1, lm_logits.shape[-1]), labels.view(-1), ignore_index=0) return loss def _step(self, batch): pad_token_id = self.tokenizer.pad_token_id decoder_input_ids = shift_tokens_right( batch["target_input_ids"], pad_token_id).to(self.device) decoder_input_ids[:, 0] = self.tokenizer.pad_token_id return self.model( input_ids=batch["source_input_ids"].to(self.device), attention_mask=batch["source_attention_mask"].to(self.device), decoder_input_ids=decoder_input_ids, decoder_attention_mask=batch["target_attention_mask"].to( self.device), use_cache=False, return_dict=True, ) ``` I double checked this with code in `examples/seq2seq` and `modeling_bart` and it seems to be reasonable. Only difference is when I do the shift_tokens_right I make sure to use Pegasus's decoder_start_token_id of 0 = pad_token_id rather than eos. I tried w and w/o this and the results seem to be similar. Also I trained both checkpoints with batch size of 4 accumulating 64 batches so effective batch size is 256 as suggested in the paper. Any idea where I am going wrong with this?
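For reference, a minimal sketch of the label shifting the issue describes, using Pegasus's pad token (id 0) as the decoder start token; the token ids below are made up:

```python
# Sketch: T5/Pegasus-style shift-right of the labels to build decoder inputs.
import torch

def shift_right(labels: torch.Tensor, decoder_start_token_id: int) -> torch.Tensor:
    shifted = labels.new_zeros(labels.shape)
    shifted[:, 1:] = labels[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    return shifted

labels = torch.tensor([[42, 7, 99, 1]])  # illustrative ids, 1 = eos
print(shift_right(labels, decoder_start_token_id=0))
# tensor([[ 0, 42,  7, 99]])
```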
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8236/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8236/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8235
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8235/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8235/comments
https://api.github.com/repos/huggingface/transformers/issues/8235/events
https://github.com/huggingface/transformers/pull/8235
734,497,776
MDExOlB1bGxSZXF1ZXN0NTE0MDUyNDMw
8,235
doc: fix typo
{ "login": "monperrus", "id": 803666, "node_id": "MDQ6VXNlcjgwMzY2Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/803666?v=4", "gravatar_id": "", "url": "https://api.github.com/users/monperrus", "html_url": "https://github.com/monperrus", "followers_url": "https://api.github.com/users/monperrus/followers", "following_url": "https://api.github.com/users/monperrus/following{/other_user}", "gists_url": "https://api.github.com/users/monperrus/gists{/gist_id}", "starred_url": "https://api.github.com/users/monperrus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/monperrus/subscriptions", "organizations_url": "https://api.github.com/users/monperrus/orgs", "repos_url": "https://api.github.com/users/monperrus/repos", "events_url": "https://api.github.com/users/monperrus/events{/privacy}", "received_events_url": "https://api.github.com/users/monperrus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes a typo ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8235/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8235", "html_url": "https://github.com/huggingface/transformers/pull/8235", "diff_url": "https://github.com/huggingface/transformers/pull/8235.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8235.patch", "merged_at": 1604325198000 }
https://api.github.com/repos/huggingface/transformers/issues/8234
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8234/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8234/comments
https://api.github.com/repos/huggingface/transformers/issues/8234/events
https://github.com/huggingface/transformers/issues/8234
734,483,682
MDU6SXNzdWU3MzQ0ODM2ODI=
8,234
filelock hangs for example script "run_language_modeling.py"
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @sgugger", "How long did you let the script hang for? It's probably tokenizing your dataset, which might take a while. Did you try with smaller files to see if it still hanged?", "Hi @LysandreJik, I did try with a single 30MB file as reported, still hanging. It's hanging for hours. \r\nLately I've thought that it was because of a download, as per your source code where filelock is used, but I've used the model in a notebook before so it should be cached?\r\n\r\nEDIT: I'm very sorry, it is actually running now on the small file, I'm baffled - could've sworn it was stuck this week-end.\r\nCulprit could the tokenizer then indeed, but I'm unclear why the filelock would be the breaking point.\r\nI had modified the script file to force the logging level to be debug, and it does get stuck for multiple hours on one of the files when using the globbing pattern with `--train_data_files`", "The issue with the `run_language_modeling.py` script is that it does not leverage the fast tokenizers, so it can take a while to tokenize every file.\r\n\r\nThis script has been deprecated for a couple of days now, and we have introduced several different scripts in the [`examples/language-modeling` directory](https://github.com/huggingface/transformers/tree/master/examples/language-modeling).\r\n\r\nThese updated scripts now leverage the fast tokenizers by default, which should make it way faster to tokenize your files *and* you won't need to split your files into multiple small files anymore.\r\n\r\nLet me know if you get to use that script, and if it fits your needs.", "Yup I've actually tried right after my last comment to actually debug it and saw you had pushed a new script. Using it right now, seems to go smoothly for now (tokenizing the 7GB file, entering a third progress bar, first two lasted 40min each, i'm assuming it's the tokenizer).\r\nThanks, closing this!", "Glad it works!" ]
1,604
1,604
1,604
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.4.0 - Platform: Linux-3.10.0-957.27.2.el7.x86_64-x86_64-with-centos-7.8.2003-Core - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @julien-c (most frequent commiter on `git log examples/language-modeling/run_language_modeling.py`) ## Information Model I am using (Bert, XLNet ...): CamemBERT The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) lm * [ ] my own task or dataset: (give details below) ## To reproduce Context: my training dir is ~200 files of ~30MB, as per documentation instructions to keep train files small for the tokenizer (however since I'm finetuning from CamemBERT I wouldn't expect a tokenizer "train" to be run?) I'm unable to figure out why this freezes, looking for pointers getting the same behaviour with a single 30MB training file ``` python run_language_modeling.py \ --output_dir=output \ --model_name_or_path=camembert-base \ --do_train \ --train_data_files='/home/theo_nabla_com/data/mydata-corpus/chunk*' \ --do_eval \ --eval_data_file=/home/theo_nabla_com/data/mydata-corpus/valid \ --mlm \ --whole_word_mask 10/29/2020 08:09:13 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False 10/29/2020 08:09:13 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='output', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct29_08-09-13_google3-theo', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='output', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None) 10/29/2020 08:09:13 - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): s3.amazonaws.com:443 10/29/2020 08:09:14 - DEBUG - urllib3.connectionpool - https://s3.amazonaws.com:443 "HEAD /models.huggingface.co/bert/camembert-base-config.json HTTP/1.1" 200 0 10/29/2020 08:09:14 - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): s3.amazonaws.com:443 10/29/2020 08:09:14 - DEBUG - urllib3.connectionpool - https://s3.amazonaws.com:443 "HEAD /models.huggingface.co/bert/camembert-base-config.json HTTP/1.1" 200 0 10/29/2020 08:09:14 - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): s3.amazonaws.com:443 10/29/2020 08:09:14 - DEBUG - urllib3.connectionpool - https://s3.amazonaws.com:443 "HEAD /models.huggingface.co/bert/camembert-base-sentencepiece.bpe.model HTTP/1.1" 200 0 
/home/theo_nabla_com/code/transformers/src/transformers/modeling_auto.py:822: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models. FutureWarning, 10/29/2020 08:09:14 - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): cdn.huggingface.co:443 10/29/2020 08:09:14 - DEBUG - urllib3.connectionpool - https://cdn.huggingface.co:443 "HEAD /camembert-base-pytorch_model.bin HTTP/1.1" 200 0 Some weights of CamembertForMaskedLM were not initialized from the model checkpoint at camembert-base and are newly initialized: ['lm_head.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. /home/theo_nabla_com/code/transformers/src/transformers/tokenization_utils_base.py:1421: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead. FutureWarning, 10/29/2020 08:09:19 - DEBUG - filelock - Attempting to acquire lock 140320072690936 on /home/theo_nabla_com/data/mydata-corpus/cached_lm_CamembertTokenizer_510_chunkaj.lock 10/29/2020 08:09:19 - INFO - filelock - Lock 140320072690936 acquired on /home/theo_nabla_com/data/mydata-corpus/cached_lm_CamembertTokenizer_510_chunkaj.lock ``` (posted this on the discuss but it didn't get attention, [here](https://discuss.huggingface.co/t/hang-in-language-modelling-script/1792))
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8234/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8233
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8233/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8233/comments
https://api.github.com/repos/huggingface/transformers/issues/8233/events
https://github.com/huggingface/transformers/issues/8233
734,436,687
MDU6SXNzdWU3MzQ0MzY2ODc=
8,233
Contributing trained Greek<->English NMT models implemented with fairseq
{ "login": "lighteternal", "id": 22905968, "node_id": "MDQ6VXNlcjIyOTA1OTY4", "avatar_url": "https://avatars.githubusercontent.com/u/22905968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lighteternal", "html_url": "https://github.com/lighteternal", "followers_url": "https://api.github.com/users/lighteternal/followers", "following_url": "https://api.github.com/users/lighteternal/following{/other_user}", "gists_url": "https://api.github.com/users/lighteternal/gists{/gist_id}", "starred_url": "https://api.github.com/users/lighteternal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lighteternal/subscriptions", "organizations_url": "https://api.github.com/users/lighteternal/orgs", "repos_url": "https://api.github.com/users/lighteternal/repos", "events_url": "https://api.github.com/users/lighteternal/events{/privacy}", "received_events_url": "https://api.github.com/users/lighteternal/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "We'd love to help you share your models! Both @sshleifer and @stas00 worked on MT models and used Fairseq recently so might be able to help.", "If I'm not mistaken the only difference between [wmt19](https://github.com/pytorch/fairseq/blob/master/examples/wmt19/README.md) and iwslt is the configuration of the layers. In which case it should be trivial to port it to `transformers` via [FSMT](https://huggingface.co/transformers/model_doc/fsmt.html). FSMT = FairSeqMachineTranslation.\r\n\r\nYou can try it yourself using the [conversion script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py) and if you get stuck please ask for help, pasting the code of what you have tried. You can see how it is used [here](https://github.com/huggingface/transformers/blob/master/scripts/fsmt/convert-facebook-wmt19.sh) and 2 more [here](https://github.com/huggingface/transformers/tree/master/scripts/fsmt).\r\n\r\nThe only thing the script can't automate at the moment is hyper param presetting, since they are not part of the model dump, we probably need to add clargs to optionally set those. Until now I embedded them in the script itself but that's not the best way to move forward. But let's handle that when everything else is working for you, the converted model will just use the default hparam settings.", "Many thanks for the prompt response. I will try the script and update on the progress. \r\nApart from the model weights themselves, I assume I will need to take care of the preprocessing (Moses tokenization and fastBPE) as well, in order to load the model and perform inference without issues. ", "FSMT already does moses+bpe. No pre- or post-processing is required.", "That's great, thx! I also just read it on the FSMT doc. ^_^", "Edit: Updated with proper code block formatting. \r\n\r\nSorry for the delay @stas00! After updating to the latest transformers and fairseq versions, I had some progress. 
\r\n\r\nOK so I followed the steps and it seems that the conversion starts succesfully using this command: \r\n\r\n```\r\nPYTHONPATH=\"src\" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/trans_elen/checkpoint_best.pt --pytorch_dump_folder_path data/wmt16-el-en-dist\r\n```\r\n\r\nBut after a few seconds, it returns an error:\r\n```\r\n(base) earendil@C3PO$~/Desktop/conversion/transformers PYTHONPATH=\"src\" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/trans_elen/checkpoint_best.pt --pytorch_dump_folder_path data/wmt16-el-en-dist\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 
1)])\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\r\nWriting results to data/wmt16-el-en-dist\r\nusing checkpoint checkpoint_best.pt\r\n/home/earendil/anaconda3/lib/python3.6/site-packages/hydra/_internal/hydra.py:71: UserWarning: \r\[email protected](strict) flag is deprecated and will removed in the next version.\r\nSee https://hydra.cc/docs/next/upgrades/0.11_to_1.0/strict_mode_flag_deprecated\r\n warnings.warn(message=msg, category=UserWarning)\r\nTraceback (most recent call last):\r\n File \"src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py\", line 271, in <module>\r\n convert_fsmt_checkpoint_to_pytorch(args.fsmt_checkpoint_path, args.pytorch_dump_folder_path)\r\n File \"src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py\", line 118, in convert_fsmt_checkpoint_to_pytorch\r\n src_lang = args[\"source_lang\"]\r\nKeyError: 'source_lang'\r\n```\r\n\r\nWhich I cannot debug since I don't recall inputting any argument regarding src and tgt languages. Aren't these arguments acquired from the model checkpoint?", "Could you please re-edit you comment and use proper code block formatting? it's impossible to figure out what it says since there are warnings mixed in - the new lines are needed to be able to parse it.\r\n\r\nPlease use the menu bar (`<>` button) or start/end with three backticks if you do it manually.\r\n\r\nIt should appear like so (I pasted a totally random error just as a demo):\r\n\r\n```\r\n \"\"\"\r\n tens_ops = (input, weight)\r\n if not torch.jit.is_scripting():\r\n if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops):\r\n return handle_torch_function(linear, tens_ops, input, weight, bias=bias)\r\n if input.dim() == 2 and bias is not None:\r\n # fused op is marginally faster\r\n ret = torch.addmm(bias, input, weight.t())\r\n else:\r\n> output = input.matmul(weight.t())\r\nE RuntimeError: CUDA out of memory. Tried to allocate 100.00 MiB (GPU 0; 23.70 GiB total capacity; 21.83 GiB already allocated; 19.69 MiB free; 22.08 GiB reserved in total by PyTorch)\r\n```\r\n", "Sorry, just fixed it.", "Ah, much better - thank you!\r\n\r\nSo your model is different from wmt19's series, it fails here:\r\n\r\n```\r\n src_lang = args[\"source_lang\"]\r\n tgt_lang = args[\"target_lang\"]\r\n```\r\nwhich comes from the checkpoint we are trying to convert.\r\n\r\nBefore it fails do:\r\n```\r\nprint(args.keys())\r\n```\r\nand see what you have in there. 
\r\n\r\nMost likely you're converting a different architecture, in which case this script won't work as is.\r\n\r\nIf you can't figure it out please send me the info on how to get the checkpoint and all the vocab/config files it comes with and I will have a look.\r\n", "The output of `print(args.keys())` is:\r\n\r\n\r\n```\r\ndict_keys(['_metadata', '_parent', '_content'])\r\n```", "OK, so this is a totally different arch then. In wmt19 the args contain a large set of model configuration, see: a few paragraphs into this section https://huggingface.co/blog/porting-fsmt#porting-weights-and-configuration\r\n\r\nSo where does it store the model configuration? Or does it not, and there is just a fixed config - in which case what is it? How does one derive this from the checkpoint? Is it possible that you forgot to save it in the checkpoint? Or that the code you were using for some reason wasn't saving it?\r\n\r\nIn addition to answering the above, please send me the download info (checkpoint file, and dict, config files) and I will see whether the FSMT arch can somehow be re-used.", "I am not sure where the model configuration is saved, tbh. In my implementation I was just following the steps from this guide: \r\nhttps://github.com/pytorch/fairseq/tree/master/examples/translation#training-a-new-model\r\nbut using my own data of course. If you check the following script: \r\n```\r\nCUDA_VISIBLE_DEVICES=0 fairseq-train \\\r\n    data-bin/iwslt14.tokenized.de-en \\\r\n    --arch transformer_iwslt_de_en --share-decoder-input-output-embed \\\r\n    --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \\\r\n    --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \\\r\n    --dropout 0.3 --weight-decay 0.0001 \\\r\n    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \\\r\n    --max-tokens 4096 \\\r\n    --eval-bleu \\\r\n    --eval-bleu-args '{\"beam\": 5, \"max_len_a\": 1.2, \"max_len_b\": 10}' \\\r\n    --eval-bleu-detok moses \\\r\n    --eval-bleu-remove-bpe \\\r\n    --eval-bleu-print-samples \\\r\n    --best-checkpoint-metric bleu --maximize-best-checkpoint-metric\r\n```\r\nit seems that the `--arch transformer_iwslt_de_en` flag is enough for the trainer to understand the architecture (according to this [post](https://github.com/pytorch/fairseq/issues/1301), the key difference is in the FFN hidden dim: iwslt_de_en is 1024 and transformer is 2048).\r\n\r\nI am uploading the files to a GDrive folder (it will take a while for the checkpoint) and will email you the link if that's ok (mail found on your website).", "Thank you for that info, @lighteternal. \r\n\r\nI will have a look at the data you sent to me (thank you) and will get back to you.\r\n\r\n", "Let's continue over https://github.com/huggingface/transformers/pull/8374", "Closing this, as it's solved by @stas00 in #8374 " ]
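For anyone hitting the same `KeyError`, a hedged sketch of inspecting what a fairseq checkpoint actually stores; key names differ between fairseq versions (older checkpoints carry an `args` Namespace, newer OmegaConf-based ones a `cfg` entry, which is what the `_metadata`/`_parent`/`_content` keys above point to):

```python
# Sketch: peek inside a fairseq checkpoint before running the converter.
import torch

ckpt = torch.load("checkpoint_best.pt", map_location="cpu")  # illustrative path
print(ckpt.keys())  # typically includes 'args' or 'cfg', 'model', 'optimizer_history', ...
cfg = ckpt.get("args") or ckpt.get("cfg")
print(cfg)
```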
1,604
1,604
1,604
NONE
null
Hi there, quick question that I couldn't answer by searching the docs: I trained an EL-EN (Greek to English) and an EN-EL machine translation model using the fairseq implementation of the `transformer_iwslt_de_en `architecture on ~6GB of parallel corpora. Given that the models report a better BLEU score compared to the existing SotA, I would like to share them somehow. I thought that fairseq might offer a huggingface-like way to upload trained models but I couldn't find any, so I would appreciate any guidance. If there's a straightforward way to convert and upload these as huggingface models it would be great! Many thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8233/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8232
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8232/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8232/comments
https://api.github.com/repos/huggingface/transformers/issues/8232/events
https://github.com/huggingface/transformers/issues/8232
734,422,578
MDU6SXNzdWU3MzQ0MjI1Nzg=
8,232
ImportError: cannot import name 'SAVE_STATE_WARNING' from 'torch.optim.lr_scheduler'
{ "login": "yuchenlin", "id": 10104354, "node_id": "MDQ6VXNlcjEwMTA0MzU0", "avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuchenlin", "html_url": "https://github.com/yuchenlin", "followers_url": "https://api.github.com/users/yuchenlin/followers", "following_url": "https://api.github.com/users/yuchenlin/following{/other_user}", "gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions", "organizations_url": "https://api.github.com/users/yuchenlin/orgs", "repos_url": "https://api.github.com/users/yuchenlin/repos", "events_url": "https://api.github.com/users/yuchenlin/events{/privacy}", "received_events_url": "https://api.github.com/users/yuchenlin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Oh I didn't check when they added this. Do you know if PyTorch 1.4.0 is the last version without it? Will add a fix this morning.", "thank you for the quick fix.", "`SAVE_STATE_WARNING` has been removed 3 days ago: https://github.com/pytorch/pytorch/pull/46813\r\n\r\nNeed to update https://github.com/huggingface/transformers/pull/8237 to reflect this change." ]
1,604
1,607
1,607
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: ubuntu 18.04 - Python version: 3.7 - PyTorch version (GPU?): 1.4.0 - Tensorflow version (GPU?): - Using GPU in script?: no - Using distributed or parallel set-up in script?:no ### Who can help Trainer: @sgugger ## Information This import is not compatible with PyTorch 1.4.0 The problem arises when using: * [ *] the official example scripts: (give details below) The tasks I am working on is: * [ *] an official GLUE/SQUaD task: (give the name) ## To reproduce Steps to reproduce the behavior: ```python >>> from transformers import PreTrainedTokenizer, is_tf_available, is_torch_available Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/yuchenlin/anaconda3/envs/mcqa/lib/python3.7/site-packages/transformers/__init__.py", line 611, in <module> from .trainer import Trainer File "/home/yuchenlin/anaconda3/envs/mcqa/lib/python3.7/site-packages/transformers/trainer.py", line 69, in <module> from .trainer_pt_utils import ( File "/home/yuchenlin/anaconda3/envs/mcqa/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 26, in <module> from torch.optim.lr_scheduler import SAVE_STATE_WARNING ImportError: cannot import name 'SAVE_STATE_WARNING' from 'torch.optim.lr_scheduler' (/home/yuchenlin/anaconda3/envs/mcqa/lib/python3.7/site-packages/torch/optim/lr_scheduler.py) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8232/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8232/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8231
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8231/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8231/comments
https://api.github.com/repos/huggingface/transformers/issues/8231/events
https://github.com/huggingface/transformers/pull/8231
734,372,830
MDExOlB1bGxSZXF1ZXN0NTEzOTQ3ODE0
8,231
Tf longformer for sequence classification
{ "login": "elk-cloner", "id": 5828101, "node_id": "MDQ6VXNlcjU4MjgxMDE=", "avatar_url": "https://avatars.githubusercontent.com/u/5828101?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elk-cloner", "html_url": "https://github.com/elk-cloner", "followers_url": "https://api.github.com/users/elk-cloner/followers", "following_url": "https://api.github.com/users/elk-cloner/following{/other_user}", "gists_url": "https://api.github.com/users/elk-cloner/gists{/gist_id}", "starred_url": "https://api.github.com/users/elk-cloner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elk-cloner/subscriptions", "organizations_url": "https://api.github.com/users/elk-cloner/orgs", "repos_url": "https://api.github.com/users/elk-cloner/repos", "events_url": "https://api.github.com/users/elk-cloner/events{/privacy}", "received_events_url": "https://api.github.com/users/elk-cloner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@elk-cloner - thanks a lot for taking a look into this! \r\n\r\nWould be awesome to fix the TFLongformer related tests. There seem to be some obvious bug: `UnboundLocalError: local variable 'input_ids' referenced before assignment` . \r\n\r\nI'll do a longer review once these tests are fixed :-) Lemme know if you need help at some point.", "@patrickvonplaten i have passed all the tests but got stuck in `test_inputs_embeds` when it's checking `TFLongformerForMultipleChoice` model, i debugged my code and found out that `inputs_embeds` shape is not same when `TFLongformerEmbeddings` get call from [here](https://github.com/elk-cloner/transformers/blob/28ab848279d31970c9f3390a480041eca2beee82/src/transformers/modeling_tf_longformer.py#L2232)(test_inputs_embeds) and [here](https://github.com/elk-cloner/transformers/blob/28ab848279d31970c9f3390a480041eca2beee82/tests/test_modeling_tf_common.py#L702)(TFLongformerForMultipleChoice), but don't know how to fix it, can you help me ?", "Hey @elk-cloner,\r\n\r\nyeah this problem was not at all obvious! Thanks for letting me know :-) For Multiple Choice, we have to make sure that the position_ids stay 2-dimensional, which is only relevant for TFLongformer, but not for other TF models -> so we need this `if` fix here. \r\n\r\nFeel free to ping me again, when you're ready with the PR or need help :-) ", "@patrickvonplaten all tests have passed, can you take a look ?", "Hey @elk-cloner - the signature of the function calls should be done analogs to the one in other `modeling_tf_....py` files. Woud be great if you can fix that before we merge. ", "Good to merge IMO! ", "Checked the slow tests and everything passes. Great job @elk-cloner! Longformer is definitely not the easiest model", "Would be awesome if @LysandreJik and @sgugger can take a final look, then we're good to merge." ]
1,604
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> implement SequenceClassification, MultipleChoice and TokenClassification classes for TFLongformer. Resolves #6401 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Longformer, Reformer: @patrickvonplaten -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8231/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8231", "html_url": "https://github.com/huggingface/transformers/pull/8231", "diff_url": "https://github.com/huggingface/transformers/pull/8231.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8231.patch", "merged_at": 1605800248000 }
https://api.github.com/repos/huggingface/transformers/issues/8230
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8230/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8230/comments
https://api.github.com/repos/huggingface/transformers/issues/8230/events
https://github.com/huggingface/transformers/pull/8230
734,364,526
MDExOlB1bGxSZXF1ZXN0NTEzOTQwOTY1
8,230
Fixed emmental example.
{ "login": "madlag", "id": 272253, "node_id": "MDQ6VXNlcjI3MjI1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4", "gravatar_id": "", "url": "https://api.github.com/users/madlag", "html_url": "https://github.com/madlag", "followers_url": "https://api.github.com/users/madlag/followers", "following_url": "https://api.github.com/users/madlag/following{/other_user}", "gists_url": "https://api.github.com/users/madlag/gists{/gist_id}", "starred_url": "https://api.github.com/users/madlag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/madlag/subscriptions", "organizations_url": "https://api.github.com/users/madlag/orgs", "repos_url": "https://api.github.com/users/madlag/repos", "events_url": "https://api.github.com/users/madlag/events{/privacy}", "received_events_url": "https://api.github.com/users/madlag/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "looks good to me!", "looks good to me too!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
CONTRIBUTOR
null
Added information about loading of SQuAD data to the README. Fixed BertLayerNorm, which disappeared some time ago, by replacing it with torch.nn.LayerNorm (which, it seems, was buggy a long time ago).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8230/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8230", "html_url": "https://github.com/huggingface/transformers/pull/8230", "diff_url": "https://github.com/huggingface/transformers/pull/8230.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8230.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8229
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8229/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8229/comments
https://api.github.com/repos/huggingface/transformers/issues/8229/events
https://github.com/huggingface/transformers/issues/8229
734,239,498
MDU6SXNzdWU3MzQyMzk0OTg=
8,229
is it possible to extract the attention weights on test inputs when the pretrained model is fine-tuned on custom data?
{ "login": "hittle2015", "id": 10519611, "node_id": "MDQ6VXNlcjEwNTE5NjEx", "avatar_url": "https://avatars.githubusercontent.com/u/10519611?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hittle2015", "html_url": "https://github.com/hittle2015", "followers_url": "https://api.github.com/users/hittle2015/followers", "following_url": "https://api.github.com/users/hittle2015/following{/other_user}", "gists_url": "https://api.github.com/users/hittle2015/gists{/gist_id}", "starred_url": "https://api.github.com/users/hittle2015/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hittle2015/subscriptions", "organizations_url": "https://api.github.com/users/hittle2015/orgs", "repos_url": "https://api.github.com/users/hittle2015/repos", "events_url": "https://api.github.com/users/hittle2015/events{/privacy}", "received_events_url": "https://api.github.com/users/hittle2015/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
# ❓ Questions & Help I am wondering if it's possible to look into the attention weights on test data when the fine-tuned model is running. I tried to look for some docs for help but could not find useful guidance. Any pointers will be appreciated. Thanks a lot in advance. <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8229/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8228
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8228/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8228/comments
https://api.github.com/repos/huggingface/transformers/issues/8228/events
https://github.com/huggingface/transformers/issues/8228
734,162,194
MDU6SXNzdWU3MzQxNjIxOTQ=
8,228
segmentation fault (core dumped) proxychains4 python xxx.py
{ "login": "Jasonsey", "id": 26242648, "node_id": "MDQ6VXNlcjI2MjQyNjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/26242648?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jasonsey", "html_url": "https://github.com/Jasonsey", "followers_url": "https://api.github.com/users/Jasonsey/followers", "following_url": "https://api.github.com/users/Jasonsey/following{/other_user}", "gists_url": "https://api.github.com/users/Jasonsey/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jasonsey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jasonsey/subscriptions", "organizations_url": "https://api.github.com/users/Jasonsey/orgs", "repos_url": "https://api.github.com/users/Jasonsey/repos", "events_url": "https://api.github.com/users/Jasonsey/events{/privacy}", "received_events_url": "https://api.github.com/users/Jasonsey/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
## Environment info - `transformers` version: 3.4.0 - Platform: ubuntu 18.04 - Python version: 3.6 - PyTorch version (GPU?): 1.4.0 GPU - Tensorflow version (GPU?): 2.2.0 GPU - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no - Network: need to use http proxy to download files and I use the tool ProxyChains4 ## To reproduce Steps to reproduce the behavior: 1. Save the following code to file test.py ```python from transformers import pipeline classifier = pipeline('sentiment-analysis') classifier('We are very happy to include pipeline into the transformers repository.') ``` 2. Exec `proxychains4 python test.py` 3. The following error was raised ```shell (test) ➜ test-transformer proxychains4 python test.py [proxychains] config file found: /etc/proxychains4.conf [proxychains] preloading /usr/lib/x86_64-linux-gnu/libproxychains.so.4 [proxychains] DLL init: proxychains-ng 4.12 [proxychains] DLL init: proxychains-ng 4.12 [proxychains] Strict chain ... 10.74.193.90:80 ... s3.amazonaws.com:443 ... OK [proxychains] Strict chain ... 10.74.193.90:80 ... s3.amazonaws.com:443 ... OK [proxychains] Strict chain ... 10.74.193.90:80 ... s3.amazonaws.com:443 ... OK [proxychains] Strict chain ... 10.74.193.90:80 ... s3.amazonaws.com:443 ... OK [proxychains] Strict chain ... 10.74.193.90:80 ... s3.amazonaws.com:443 ... OK [proxychains] Strict chain ... 10.74.193.90:80 ... s3.amazonaws.com:443 ... OK [proxychains] Strict chain ... 10.74.193.90:80 ... s3.amazonaws.com:443 ... OK [proxychains] Strict chain ... 10.74.193.90:80 ... s3.amazonaws.com:443 ... OK [proxychains] Strict chain ... 10.74.193.90:80 ... cdn.huggingface.co:443 ... OK [1] 9790 segmentation fault (core dumped) proxychains4 python test.py ``` ## Expected behavior The model files can be downloaded without error
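Not a fix for the segfault itself, but a possible workaround sketch: `transformers` downloads files through `requests`, which honors the standard proxy environment variables, so the `LD_PRELOAD`-based proxychains hack may be avoidable entirely. The proxy address below is a placeholder copied from the log and is an assumption; replace it with your own.

```python
import os

# Placeholder proxy address (taken from the proxychains log above); adjust as needed.
os.environ["HTTP_PROXY"] = "http://10.74.193.90:80"
os.environ["HTTPS_PROXY"] = "http://10.74.193.90:80"

from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("We are very happy to include pipeline into the transformers repository."))
```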
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8228/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8228/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8227
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8227/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8227/comments
https://api.github.com/repos/huggingface/transformers/issues/8227/events
https://github.com/huggingface/transformers/issues/8227
734,124,984
MDU6SXNzdWU3MzQxMjQ5ODQ=
8,227
convert_graph_to_onnx.py and associated example notebook are broken for TensorFlow
{ "login": "amaiya", "id": 47191980, "node_id": "MDQ6VXNlcjQ3MTkxOTgw", "avatar_url": "https://avatars.githubusercontent.com/u/47191980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amaiya", "html_url": "https://github.com/amaiya", "followers_url": "https://api.github.com/users/amaiya/followers", "following_url": "https://api.github.com/users/amaiya/following{/other_user}", "gists_url": "https://api.github.com/users/amaiya/gists{/gist_id}", "starred_url": "https://api.github.com/users/amaiya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amaiya/subscriptions", "organizations_url": "https://api.github.com/users/amaiya/orgs", "repos_url": "https://api.github.com/users/amaiya/repos", "events_url": "https://api.github.com/users/amaiya/events{/privacy}", "received_events_url": "https://api.github.com/users/amaiya/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "## Environment Info\r\n- `transformers` version: 3.4.0\r\n- Platform: Linux-4.15.0-108-generic-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.6.9\r\n- PyTorch version (GPU?): 1.6.0+cpu (False)\r\n- Tensorflow version (GPU?): 2.3.1 (False)\r\n\r\nAlso, reproduced on Google Colab, as indicated above.\r\n\r\n## Who can help\r\nTensorFlow: @jplu \r\nONNX: @mfuntowicz @sgugger @LysandreJik\r\n", "You have to create your own model with the size you need and then use the script to convert it. All the TF models are by default initialized with input sequence of 5 tokens.", "@jplu Thanks. Would you mind clarifying what you mean by \"create your own model with the size you need\"? I'm creating and fine-tuning a model with `TFBertForSequenceClassification.from_pretrained` and was trying to use the example notebook to convert it. ", "What I mean is that you have to update the input shape of your model. When you do:\r\n\r\n```\r\nTFBertForSequenceClassification.from_pretrained(\"name\")\r\n```\r\n\r\nYou model is initialized with a [dummy input](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L330) of [5 tokens](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L215). Then when you use your model the max length allowed is 5 by default.\r\n\r\nIf you want to use a model with a larger max length you have to update your input shape with:\r\n```\r\nfrom transformers import TFBertForSequenceClassification, BertTokenizerFast\r\nimport tensorflow as tf\r\nmy_model_name = \"bert-base-cased\" # replace here by the name of your model\r\ntokenizer = BertTokenizerFast.from_pretrained(my_model_name )\r\nmodel = TFBertForSequenceClassification.from_pretrained(my_model_name )\r\nsize = 510 # the max length you expect for your model. Don't forget the two extra tokens of start and end, here 510 + 2 to make 512 which is the max length allowed for all the models (except longformer).\r\ninputs_dict = tokenizer(\"hello\" * [size], return_tensors=\"tf\")\r\nmodel._saved_model_inputs_spec = None\r\nmodel._set_save_spec(inputs_dict)\r\ntf.saved_model.save(model, \"path\")\r\n``` \r\n\r\nAnd then you can create your ONNX model afterward from your saved model that will take as input your proper input length.", "@jplu Thank you!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "> \r\nNotEncodableError: No encoder for object {'input_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='input_ids/input_ids'), 'token_type_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name='token_type_ids'), 'attention_mask': TensorSpec(shape=(None, 128), dtype=tf.int32, name='attention_mask')} of type <class 'transformers.tokenization_utils_base.BatchEncoding'>. \r\n\r\nI am using the mobileBert model. But when I follow this procedure to save model in Saved Model fromat, it gives the error above. Any suggestions? Thanks! \r\n" ]
1,604
1,648
1,610
NONE
null
## Information The `convert_graph_to_onnx.py` file and the associated [example notebook](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb) appear to be broken for TensorFlow. For ONNX-exported TensorFlow models, **only input tokens of length 5 are accepted**. Other inputs (e.g., `len(tokens)>5`) result in an error: ``` InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: input_ids for the following indices index: 1 Got: 6 Expected: 5 ``` Also, if you run `session.get_inputs()` on an ONNX-exported TensorFlow model, only the `input_ids` key is listed as an input (i.e., no `attention_mask`), while ONNX PyTorch behaves differently: ```python # ONNX TensorFlow inputs for BERT model print([input.name for input in cpu_model.get_inputs()]) # only prints 'input_ids' - no 'attention_mask' # ONNX PyTorch inputs for BERT model print([input.name for input in cpu_model.get_inputs()]) # prints ['input_ids', 'attention_mask', 'token_type_ids'] ``` ## How to Reproduce In the [example notebook](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb), uncomment this TensorFlow `convert` line: ``` convert(framework="tf", model="bert-base-cased", output="onnx-test-tf/bert-base-cased.onnx", opset=11) ``` I have also posted [this Google Colab notebook](https://colab.research.google.com/drive/1A2frWgfRlL5Ysf7xVVifx58NmEoxeYmu?usp=sharing) that more concisely reproduces this issue.
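Until the script itself handles this, a sketch of the workaround discussed in the comments field above, with the snippet's `"hello" * [size]` bug corrected and the `BatchEncoding` converted to a plain dict. Note that `_set_save_spec` is internal Keras machinery, so this relies on implementation details and may break across TF versions:

```python
import tensorflow as tf
from transformers import BertTokenizerFast, TFBertForSequenceClassification

model_name = "bert-base-cased"  # replace with your fine-tuned model
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = TFBertForSequenceClassification.from_pretrained(model_name)

# Build a dummy input at the maximum sequence length the exported graph
# should accept (510 word tokens + 2 special tokens = 512, BERT's limit).
dummy = tokenizer(
    "hello " * 510,
    return_tensors="tf",
    padding="max_length",
    truncation=True,
    max_length=512,
)

# _set_save_spec is an internal Keras API; converting the BatchEncoding to a
# plain dict avoids the NotEncodableError reported in the comments above.
model._saved_model_inputs_spec = None
model._set_save_spec(dict(dummy))

tf.saved_model.save(model, "saved_model_for_onnx")
# The SavedModel can then be converted with tf2onnx, e.g.:
#   python -m tf2onnx.convert --saved-model saved_model_for_onnx --output bert.onnx --opset 11
```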
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8227/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8226
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8226/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8226/comments
https://api.github.com/repos/huggingface/transformers/issues/8226/events
https://github.com/huggingface/transformers/pull/8226
734,121,135
MDExOlB1bGxSZXF1ZXN0NTEzNzM4MTM3
8,226
[bart] 2 SinusoidalPositionalEmbedding fixes
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
This PR: * allows the `embedding_dim` param of `SinusoidalPositionalEmbedding` to be odd. * fixes a bug ("RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation") that appears on PyTorch 1.8+; this variable requires no grad, so we make it so before doing anything grad-related with it. Fixes: #8021 @sshleifer, @LysandreJik
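A minimal sketch of the pattern behind both fixes, assuming the table is filled in place after construction; this is an illustration, not the PR's exact code. Turning gradients off before the in-place fill is what keeps PyTorch 1.8+ happy, and slicing the cosine term is what makes odd dimensions work:

```python
import math
import torch
import torch.nn as nn

class SinusoidalPositionalEmbedding(nn.Embedding):
    """Positional embedding table that is computed, not learned."""

    def __init__(self, num_positions: int, embedding_dim: int):
        super().__init__(num_positions, embedding_dim)
        # The table is deterministic, so it never needs gradients; turning
        # requires_grad off first makes the in-place fill legal on PyTorch 1.8+.
        self.weight.requires_grad = False
        self._fill_with_sinusoids(self.weight)

    @staticmethod
    def _fill_with_sinusoids(out: torch.Tensor) -> None:
        n_pos, dim = out.shape
        position = torch.arange(n_pos, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, dim, 2, dtype=torch.float) * (-math.log(10000.0) / dim))
        out[:, 0::2] = torch.sin(position * div_term)               # ceil(dim / 2) columns
        out[:, 1::2] = torch.cos(position * div_term[: dim // 2])   # floor(dim / 2) columns, so odd dims fit

emb = SinusoidalPositionalEmbedding(128, 127)  # odd embedding_dim no longer crashes
```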
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8226/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8226", "html_url": "https://github.com/huggingface/transformers/pull/8226", "diff_url": "https://github.com/huggingface/transformers/pull/8226.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8226.patch", "merged_at": 1604361026000 }
https://api.github.com/repos/huggingface/transformers/issues/8225
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8225/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8225/comments
https://api.github.com/repos/huggingface/transformers/issues/8225/events
https://github.com/huggingface/transformers/issues/8225
734,078,745
MDU6SXNzdWU3MzQwNzg3NDU=
8,225
When would pegasus be able to be exported in ONNX format?
{ "login": "phosfuldev", "id": 37611258, "node_id": "MDQ6VXNlcjM3NjExMjU4", "avatar_url": "https://avatars.githubusercontent.com/u/37611258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phosfuldev", "html_url": "https://github.com/phosfuldev", "followers_url": "https://api.github.com/users/phosfuldev/followers", "following_url": "https://api.github.com/users/phosfuldev/following{/other_user}", "gists_url": "https://api.github.com/users/phosfuldev/gists{/gist_id}", "starred_url": "https://api.github.com/users/phosfuldev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phosfuldev/subscriptions", "organizations_url": "https://api.github.com/users/phosfuldev/orgs", "repos_url": "https://api.github.com/users/phosfuldev/repos", "events_url": "https://api.github.com/users/phosfuldev/events{/privacy}", "received_events_url": "https://api.github.com/users/phosfuldev/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "@patil-suraj has a partial solution that he just posted to the forums. he might be able to extend that to Pegasus/BART ", "I'm on it! Will ping here once I get it working.\r\n\r\n@phosfuldev, you can refer to this post to see how T5 is exported to onnx\r\nhttps://discuss.huggingface.co/t/speeding-up-t5-inference/1841", "@sshleifer @patil-suraj Thank you!!", "Thank you so much @patil-suraj for taking the initiative to export `Pegasus` to onnx. Eagerly waiting for it :) ", "Hi @patil-suraj \r\n\r\nPlease let us know if you have any update on exporting Pegasus to Onnx format.\r\n\r\nApologies for bothering you. \r\n\r\nThanks,\r\nKarthik", "I was about to open a new issue and then discovered this one. For reference, this is where I got stopped when trying to export a Pegasus model in ONNX format:\r\n\r\n\r\nI am using a recent clone of the `transformers` repository, cloned on `feb 18 2021`\r\n\r\nUnless I am doing something wrong, I think that the `convert_graph_to_onnx.py` script does not currently work with Pegasus models.\r\n\r\nI tried it with both `pegasus_large`, and a model that I have fined-tuned, that is based, on `pegasus_large`, with a command like this....\r\n\r\ncommand: `python3 -m transformers.convert_graph_to_onnx --framework pt --model ../models_foreign/pegasus_large ./onnx/onnx_model.onnx`\r\n\r\nand in both cases, I got this console output....\r\n\r\nconsole output\r\n```\r\n====== Converting model to ONNX ======\r\nONNX opset version set to: 11\r\nLoading pipeline (model: ../models_foreign/pegasus_large, tokenizer: ../models_foreign/pegasus_large)\r\nSome weights of PegasusModel were not initialized from the model checkpoint at ../models_foreign/pegasus_large and are newly initialized: ['model.encoder.embed_positions.weight', 'model.decoder.embed_positions.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nUsing framework PyTorch: 1.8.0a0+1606899\r\nError while converting the model: You have to specify either decoder_input_ids or decoder_inputs_embeds\r\n```\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
NONE
null
It seems it's not available yet; I got this error: `Error while converting the model: Unrecognized configuration class <class 'transformers.configuration_pegasus.PegasusConfig'> for this kind of AutoModel: AutoModel. Model type should be one of RetriBertConfig, T5Config, DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, BartConfig, LongformerConfig, RobertaConfig, LayoutLMConfig, SqueezeBertConfig, BertConfig, OpenAIGPTConfig, GPT2Config, MobileBertConfig, TransfoXLConfig, XLNetConfig, FlaubertConfig, FSMTConfig, XLMConfig, CTRLConfig, ElectraConfig, ReformerConfig, FunnelConfig, LxmertConfig, BertGenerationConfig, DebertaConfig, DPRConfig, XLMProphetNetConfig, ProphetNetConfig.` Which is fair, since Pegasus is a new addition. Is this something the team plans to do soon? Or can someone point me to some resources on whether there are other ways to export a pre-trained model from Hugging Face? I'm pretty new to the machine learning thing :p Thanks all!
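While waiting for official support, a rough sketch of a manual export path, assuming a plain forward trace is acceptable (encoder plus one decoder step; generation-time caching and beam search are not captured, so treat this as a starting point rather than a replacement for the conversion script):

```python
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
model.config.use_cache = False  # avoid tracing past-key-value cache outputs
model.eval()

enc = tokenizer("An article to summarize.", return_tensors="pt")
# convert_graph_to_onnx fails because no decoder inputs are supplied;
# for a plain forward trace we can feed the decoder start token ourselves.
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

torch.onnx.export(
    model,
    (enc["input_ids"], enc["attention_mask"], decoder_input_ids),
    "pegasus.onnx",
    input_names=["input_ids", "attention_mask", "decoder_input_ids"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "src_seq"},
        "attention_mask": {0: "batch", 1: "src_seq"},
        "decoder_input_ids": {0: "batch", 1: "tgt_seq"},
    },
    opset_version=11,
)
```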
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8225/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8224
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8224/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8224/comments
https://api.github.com/repos/huggingface/transformers/issues/8224/events
https://github.com/huggingface/transformers/pull/8224
734,053,593
MDExOlB1bGxSZXF1ZXN0NTEzNjg2NTk3
8,224
Add encoder-decoder word embeddings tying by default
{ "login": "alexyalunin", "id": 23011284, "node_id": "MDQ6VXNlcjIzMDExMjg0", "avatar_url": "https://avatars.githubusercontent.com/u/23011284?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexyalunin", "html_url": "https://github.com/alexyalunin", "followers_url": "https://api.github.com/users/alexyalunin/followers", "following_url": "https://api.github.com/users/alexyalunin/following{/other_user}", "gists_url": "https://api.github.com/users/alexyalunin/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexyalunin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexyalunin/subscriptions", "organizations_url": "https://api.github.com/users/alexyalunin/orgs", "repos_url": "https://api.github.com/users/alexyalunin/repos", "events_url": "https://api.github.com/users/alexyalunin/events{/privacy}", "received_events_url": "https://api.github.com/users/alexyalunin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@alexyalunin - this is a great PR, thanks a lot! In general the function does exactly what I had in mind :-) I added some changes that I'd suggest we apply.\r\n\r\nAlso it would be great if we could add a test analogues to this one: https://github.com/huggingface/transformers/blob/93354bc7790ecf768690745db2407b7542264304/tests/test_modeling_encoder_decoder.py#L306 . \r\n\r\nIf you have any questions or need help, let me know! \r\n\r\nLooking forward to merge this soon ", "> Thanks for the PR! Though I think the name of this option is kinda confusing, I can't think of a better one :)\r\n\r\nI'm fine with the name", "@patrickvonplaten finally I have found some time to finish this PR. I couldn't finish tests, you see I tried to initialize EncoderDecoder with a model and its copy and tie word embeddings, it seems like they are tied (I check by looking at `model.named_parameters()`, i.e these parameters do not have `decoder.word_embeddings`), but when I save the model and load it `decoder.word_embeddings` now appear in `model.named_parameters()`. I trained such a model for my project and after few epochs word embs for encoder and decoder are the same but they both appear in `model.named_parameters()`. Pls take a look. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
NONE
null
As discussed in #8158, the config has a `tie_encoder_decoder_word_embeds=True` parameter. `_tie_encoder_decoder_word_embeddings` is called when the EncoderDecoder model is initialized; if the sizes are the same, the encoder word embedding matrix is assigned to the decoder one. This may cause unexpected behavior if, e.g., a user chooses to initialize the model with BERT and GPT and their vocabularies happen to have the same size but contain different words. @patrickvonplaten
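For illustration, a minimal sketch of what the tying amounts to, using two BERT checkpoints (which genuinely share a vocabulary, so the size-based check is safe here):

```python
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# Share a single embedding matrix between encoder and decoder,
# then re-tie the decoder's LM head to the new input embeddings.
model.decoder.set_input_embeddings(model.encoder.get_input_embeddings())
model.decoder.tie_weights()

enc_emb = model.encoder.get_input_embeddings().weight
dec_emb = model.decoder.get_input_embeddings().weight
assert enc_emb.data_ptr() == dec_emb.data_ptr()  # same storage, truly tied
```

After the assignment both modules reference one tensor, which is exactly why a BERT/GPT pair with coincidentally equal vocab sizes would silently share incompatible embeddings; that is the pitfall described above.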
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8224/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8224", "html_url": "https://github.com/huggingface/transformers/pull/8224", "diff_url": "https://github.com/huggingface/transformers/pull/8224.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8224.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8223
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8223/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8223/comments
https://api.github.com/repos/huggingface/transformers/issues/8223/events
https://github.com/huggingface/transformers/pull/8223
734,024,431
MDExOlB1bGxSZXF1ZXN0NTEzNjY0NjYx
8,223
Create README.md
{ "login": "yfpeng", "id": 2766437, "node_id": "MDQ6VXNlcjI3NjY0Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/2766437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yfpeng", "html_url": "https://github.com/yfpeng", "followers_url": "https://api.github.com/users/yfpeng/followers", "following_url": "https://api.github.com/users/yfpeng/following{/other_user}", "gists_url": "https://api.github.com/users/yfpeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/yfpeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yfpeng/subscriptions", "organizations_url": "https://api.github.com/users/yfpeng/orgs", "repos_url": "https://api.github.com/users/yfpeng/repos", "events_url": "https://api.github.com/users/yfpeng/events{/privacy}", "received_events_url": "https://api.github.com/users/yfpeng/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8223/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8223", "html_url": "https://github.com/huggingface/transformers/pull/8223", "diff_url": "https://github.com/huggingface/transformers/pull/8223.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8223.patch", "merged_at": 1604563400000 }
https://api.github.com/repos/huggingface/transformers/issues/8222
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8222/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8222/comments
https://api.github.com/repos/huggingface/transformers/issues/8222/events
https://github.com/huggingface/transformers/issues/8222
734,004,431
MDU6SXNzdWU3MzQwMDQ0MzE=
8,222
Why is the accuracy rate of the pre-trained GPT-2 model only ~26%?
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, we try to keep the github issues for bugs only. Could you open a thread on the [forum](https://discuss.huggingface.co) instead? Thank you!" ]
1,604
1,604
1,604
NONE
null
Hello, I have been trying to analyze the HellaSwag dataset with the pre-trained GPT2DoubleHeadsModel. I fine-tuned the model by disabling any change in the weights of the main body (12 layers + embedding layer) while training the weights of the multiple-choice head with a moderate learning rate. My understanding is that, since the main body of the model is already pre-trained, I should get a reasonably high accuracy rate for the HellaSwag task as long as I do a good job of training the weights of the multiple-choice head. However, the accuracy rate of the pre-trained GPT2DoubleHeadsModel on the HellaSwag task is only ~26% (although my training loss is only ~1.40). Why is my accuracy rate so low? Is it because I am not fine-tuning the weights of the main body of the model during training? Any advice would be highly appreciated. Thank you,
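For reference, a sketch of the freezing setup described above, so that only the multiple-choice head receives gradients. One caveat worth noting: with just a linear summary head on top of frozen features, near-chance accuracy (~25% on a four-way task like HellaSwag) is not unexpected.

```python
import torch
from transformers import GPT2DoubleHeadsModel

model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

# Freeze the whole transformer body (token/position embeddings + 12 blocks).
# The LM head shares weights with the input embeddings, so it is frozen too;
# only the multiple-choice head (a SequenceSummary module) stays trainable.
for param in model.transformer.parameters():
    param.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
print(sum(p.numel() for p in trainable), "trainable parameters")
```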
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8222/timeline
completed
null
null