| Column | Type | Details |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k, nullable (βŒ€) |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k, nullable (βŒ€) |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
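For orientation, here is a minimal sketch of building a toy dataset with a subset of these columns using the `datasets` library. The column subset is an illustrative assumption (the real schema has all 28 columns above), and the field values are copied from the first record below.

```python
from datasets import Dataset

# A single toy row using a handful of the columns above; values are
# taken from the first record in this dump.
row = {
    "url": "https://api.github.com/repos/huggingface/transformers/issues/3712",
    "number": 3712,
    "title": "Text Generation with XLNet is very Slow",
    "state": "closed",
    "locked": False,
}
ds = Dataset.from_list([row])
print(ds.features)        # inferred type for each column
print(ds[0]["title"])
```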
https://api.github.com/repos/huggingface/transformers/issues/3712
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3712/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3712/comments
https://api.github.com/repos/huggingface/transformers/issues/3712/events
https://github.com/huggingface/transformers/issues/3712
597,074,840
MDU6SXNzdWU1OTcwNzQ4NDA=
3,712
Text Generation with XLNet is very Slow
{ "login": "urlocal12", "id": 61215920, "node_id": "MDQ6VXNlcjYxMjE1OTIw", "avatar_url": "https://avatars.githubusercontent.com/u/61215920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/urlocal12", "html_url": "https://github.com/urlocal12", "followers_url": "https://api.github.com/users/urlocal12/followers", "following_url": "https://api.github.com/users/urlocal12/following{/other_user}", "gists_url": "https://api.github.com/users/urlocal12/gists{/gist_id}", "starred_url": "https://api.github.com/users/urlocal12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/urlocal12/subscriptions", "organizations_url": "https://api.github.com/users/urlocal12/orgs", "repos_url": "https://api.github.com/users/urlocal12/repos", "events_url": "https://api.github.com/users/urlocal12/events{/privacy}", "received_events_url": "https://api.github.com/users/urlocal12/received_events", "type": "User", "site_admin": false }
[ { "id": 1834059054, "node_id": "MDU6TGFiZWwxODM0MDU5MDU0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Generation", "name": "Ex: Generation", "color": "06EFF8", "default": false, "description": "Natural Language Generation" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Reopened the previous issue: https://github.com/huggingface/transformers/issues/789 and will take a look next week :-) ", "Closing this as well. Reasons are explained in #789." ]
1,586
1,591
1,591
NONE
null
Using the run_generation script to generate text with XLNet is currently extremely slow compared to GPT-2, as mentioned in [this issue](https://github.com/huggingface/transformers/issues/789):

> To generate 100 tokens, XLNet takes **3m22s** while GPT-2 takes **14s**. And it grows exponentially: for 500 tokens, XLNet takes **51m46s** while GPT-2 takes **2m52s**.

More information on why this might be happening is included in that issue; however, it was closed before it could be resolved. If anyone could look into this and maybe reopen the original issue, that would be greatly appreciated. Thanks!
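For anyone trying to reproduce the gap outside run_generation, a minimal timing sketch is below. It assumes a transformers version where both models expose `generate()` (as the 2.x run_generation script uses); the prompt and token count are illustrative.

```python
import time

import torch
from transformers import AutoModelWithLMHead, AutoTokenizer

for name in ("gpt2", "xlnet-base-cased"):
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelWithLMHead.from_pretrained(name)
    model.eval()
    input_ids = tokenizer.encode("The weather today is", return_tensors="pt")
    start = time.time()
    with torch.no_grad():
        model.generate(input_ids, max_length=100, do_sample=True)
    print(f"{name}: {time.time() - start:.1f}s for ~100 tokens")
```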
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3712/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3711
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3711/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3711/comments
https://api.github.com/repos/huggingface/transformers/issues/3711/events
https://github.com/huggingface/transformers/issues/3711
597,065,876
MDU6SXNzdWU1OTcwNjU4NzY=
3,711
TransfoXLLMHead doesn't shift labels internally when called for loss
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[ { "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false } ]
[]
1,586
1,586
1,586
CONTRIBUTOR
null
# πŸ› Bug When called with labels to get the language-modeling loss, `TransfoXLLMHead.forward` computes the NLLLoss of the outputs directly against the labels, rather than against the shifted labels like the documentation indicates (and like the other models). This makes it impossible to train with `lm_labels = input_ids` as suggested by the doc. ## Information Model I am using: TransformerXL Language I am using the model on: English The problem arises when using: * [x] my own modified scripts: The task I am working on is: * [x] my own task or dataset: ## To reproduce ``` import torch from transformers import TransfoXLConfig, TransfoXLLMHeadModel config = TransfoXLConfig() lm = TransfoXLLMHeadModel(config) test_tensor = torch.LongTensor([[0]]) print(lm(input_ids=test_tensor, labels=test_tensor)[0]) ``` A 1x1 loss tensor is returned. ## Expected behavior As there is only 1 token in the input tensor, no loss should be returned: there's no next label to compare the output against. For example, running this with GPT2 ``` import torch from transformers import GPT2Config, GPT2LMHeadModel config = GPT2Config() lm = GPT2LMHeadModel(config) test_tensor = torch.LongTensor([[0]]) print(lm(input_ids=test_tensor, labels=test_tensor)[0]) ``` returns `tensor(nan, grad_fn=<NllLossBackward>)`. ## Environment info - `transformers` version: 2.8.0 - Platform: Linux-5.3.0-45-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: False - Using distributed or parallel set-up in script?: False
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3711/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3710
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3710/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3710/comments
https://api.github.com/repos/huggingface/transformers/issues/3710/events
https://github.com/huggingface/transformers/issues/3710
597,032,449
MDU6SXNzdWU1OTcwMzI0NDk=
3,710
inconsistent tokenize output
{ "login": "michaelmoju", "id": 30719384, "node_id": "MDQ6VXNlcjMwNzE5Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/30719384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelmoju", "html_url": "https://github.com/michaelmoju", "followers_url": "https://api.github.com/users/michaelmoju/followers", "following_url": "https://api.github.com/users/michaelmoju/following{/other_user}", "gists_url": "https://api.github.com/users/michaelmoju/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelmoju/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelmoju/subscriptions", "organizations_url": "https://api.github.com/users/michaelmoju/orgs", "repos_url": "https://api.github.com/users/michaelmoju/repos", "events_url": "https://api.github.com/users/michaelmoju/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelmoju/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,592
1,592
NONE
null
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): Chinese The problem arises when using: BertTokenizer I am using the BertModel. When predicting the result, it got inconsistent output (the output differs from time to time). It turns out that the tokenizer gets different output when the input string is "\n" or "\n\n". for example: <pre><code> from transformers.tokenization_bert import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-chinese') tokens = tokenizer.tokenize("\n\n") output: ['[SEP]'] or ['[PAD]'] or ['[CLS]'] </code></pre>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3710/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3709
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3709/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3709/comments
https://api.github.com/repos/huggingface/transformers/issues/3709/events
https://github.com/huggingface/transformers/pull/3709
596,979,337
MDExOlB1bGxSZXF1ZXN0NDAxMTc0Nzk5
3,709
Add model tag
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=h1) Report\n> Merging [#3709](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6435b9f908e7361330db89e263a65b0a58060d11&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3709/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3709 +/- ##\n==========================================\n- Coverage 78.13% 78.12% -0.01% \n==========================================\n Files 104 104 \n Lines 17723 17723 \n==========================================\n- Hits 13847 13846 -1 \n- Misses 3876 3877 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3709/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.79% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=footer). Last update [6435b9f...be477cf](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,586
1,587
1,587
CONTRIBUTOR
null
Add a model tag so the model is correctly indexed, done while working on the card description
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3709/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3709", "html_url": "https://github.com/huggingface/transformers/pull/3709", "diff_url": "https://github.com/huggingface/transformers/pull/3709.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3709.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/3708
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3708/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3708/comments
https://api.github.com/repos/huggingface/transformers/issues/3708/events
https://github.com/huggingface/transformers/pull/3708
596,978,848
MDExOlB1bGxSZXF1ZXN0NDAxMTc0NDAy
3,708
Add model tag
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,586
1,587
1,587
CONTRIBUTOR
null
Add a model tag so the model is correctly indexed, done while working on the description
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3708/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3708", "html_url": "https://github.com/huggingface/transformers/pull/3708", "diff_url": "https://github.com/huggingface/transformers/pull/3708.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3708.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/3707
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3707/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3707/comments
https://api.github.com/repos/huggingface/transformers/issues/3707/events
https://github.com/huggingface/transformers/issues/3707
596,881,489
MDU6SXNzdWU1OTY4ODE0ODk=
3,707
Distributed training on multiple GPU nodes is slower than on single GPU node
{ "login": "YingleiZhang", "id": 9091841, "node_id": "MDQ6VXNlcjkwOTE4NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/9091841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YingleiZhang", "html_url": "https://github.com/YingleiZhang", "followers_url": "https://api.github.com/users/YingleiZhang/followers", "following_url": "https://api.github.com/users/YingleiZhang/following{/other_user}", "gists_url": "https://api.github.com/users/YingleiZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/YingleiZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YingleiZhang/subscriptions", "organizations_url": "https://api.github.com/users/YingleiZhang/orgs", "repos_url": "https://api.github.com/users/YingleiZhang/repos", "events_url": "https://api.github.com/users/YingleiZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/YingleiZhang/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "Are those `p3.16xlarge` is the same AZ? Even in the same AZ, the throughput and latency between machines might be the bottleneck here.", "i.e. you usually need infiniband between cluster machines. But @mfuntowicz knows about this stuff way better than I do...", "Thanks for your reply. The two hosts are indeed in the same AZ. \r\n\r\nHere is how i run them:\r\n```\r\n# Node 1:\r\npython3.6 -m torch.distributed.launch \\\r\n--nproc_per_node=8 \\\r\n--nnodes=2 \\\r\n--node_rank=0 \\\r\n--master_addr=\"10.0.0.83\" \\\r\n--master_port=12345 \\\r\nrun_lm_finetuning.py \\\r\n --output_dir=output \\\r\n --model_type=roberta \\\r\n --model_name_or_path=roberta-base \\\r\n --do_train \\\r\n --train_data_file=$TRAIN_FILE \\\r\n --do_eval \\\r\n --eval_data_file=$TEST_FILE \\\r\n --mlm\r\n\r\n# Node 2 \r\npython3.6 -m torch.distributed.launch \\\r\n--nproc_per_node=8 \\\r\n--nnodes=2 \\\r\n--node_rank=1 \\\r\n--master_addr=\"10.0.0.83\" \\\r\n--master_port=12345 \\\r\nrun_lm_finetuning.py \\\r\n --output_dir=output \\\r\n --model_type=roberta \\\r\n --model_name_or_path=roberta-base \\\r\n --do_train \\\r\n --train_data_file=$TRAIN_FILE \\\r\n --do_eval \\\r\n --eval_data_file=$TEST_FILE \\\r\n --mlm\r\n```\r\nI also have the nccl debug info here:\r\nThe first part is on multiple nodes, where the training is slow. The second part is on single node, and the training is fast. I can definitely see that on single node, there are many Channels, which can't be found on multiple node. \r\nhttps://gist.github.com/YingleiZhang/a8df48eb534ba20ff8f26b5309094b55\r\n\r\nI was also suspecting that i might need high speed connections (link infiniband) between cluster machines, but in this case, would MPI help? My PyTorch did not built with MPI yet. ", "I think the bandwidth between two different nodes are indeed the problem. Consider the amount of data (Our estimation is about 8G for each step) we need to move between different nodes, and the bandwidth for intra-gpu communication is much higher than that of inter-node communication. NCCL folks mentioned that this could be 120 GB/s vs 10 GB/s for all reduce operation. (See it here https://github.com/NVIDIA/nccl/issues/318)\r\n\r\nI am closing this issue here. Thanks for the help. ", "Thanks for investigating.\r\n\r\nAlso from your logs @mfuntowicz was saying that it looks like NCCL does not use [AWS EFA](https://aws.amazon.com/hpc/efa/) – maybe something to investigate there." ]
1,586
1,586
1,586
NONE
null
Our team uses pre-trained models, and transformers has been a great help. To benchmark the training speed of our computing infrastructure, we were running this example: https://github.com/huggingface/transformers/blob/v2.3.0/examples/run_lm_finetuning.py

We found that on a single p3.16xlarge GPU instance, DDP took about 36:33 to train on wikitext-103-raw. However, once we moved to two p3.16xlarge GPU instances, the same dataset took longer (1:45:49). I would like to know what could possibly cause this. Two things I suspect:

1. The synchronization of gradients/parameters between different processes
2. The optimizer used in this script

Any help is appreciated. Thanks!
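One quick sanity check for whether inter-node bandwidth is the bottleneck is a back-of-the-envelope estimate of the gradient traffic each step must move, as sketched below. The parameter count and the ~10 GB/s inter-node figure (quoted later in this thread from the NCCL discussion) are rough assumptions.

```python
# Rough per-step all-reduce traffic estimate for roberta-base under
# data-parallel training with fp32 gradients.
params = 125_000_000      # ~125M parameters in roberta-base (approximate)
bytes_per_param = 4       # fp32 gradients
# A ring all-reduce moves roughly 2x the gradient buffer per worker.
traffic_gb = 2 * params * bytes_per_param / 1e9
inter_node_gb_s = 10      # ~10 GB/s inter-node vs ~120 GB/s intra-node NVLink
print(f"~{traffic_gb:.1f} GB per step, "
      f"~{traffic_gb / inter_node_gb_s:.2f} s on the wire")
```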
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3707/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3706
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3706/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3706/comments
https://api.github.com/repos/huggingface/transformers/issues/3706/events
https://github.com/huggingface/transformers/pull/3706
596,864,202
MDExOlB1bGxSZXF1ZXN0NDAxMDgxNTcy
3,706
Cleanup fast tokenizers integration
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=h1) Report\n> Merging [#3706](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f0c96fafd16d206b22a74fe76b251414f7314703&el=desc) will **decrease** coverage by `0.82%`.\n> The diff coverage is `90.64%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3706/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3706 +/- ##\n==========================================\n- Coverage 78.47% 77.65% -0.83% \n==========================================\n Files 106 106 \n Lines 17930 17904 -26 \n==========================================\n- Hits 14071 13903 -168 \n- Misses 3859 4001 +142 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `89.42% <ΓΈ> (-0.20%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `95.29% <ΓΈ> (-0.04%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert\\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `67.07% <ΓΈ> (-0.79%)` | :arrow_down: |\n| [src/transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `36.25% <ΓΈ> (+0.88%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <ΓΈ> (-0.08%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.82% <ΓΈ> (-0.05%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.71% <ΓΈ> (-0.12%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <ΓΈ> (-0.29%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `83.33% <ΓΈ> (-0.14%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `97.61% <ΓΈ> (-0.06%)` | :arrow_down: |\n| ... 
and [10 more](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=footer). Last update [f0c96fa...5b54450](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Ok here is a new version @mfuntowicz.\r\nI'll add some tests later before merging." ]
1,586
1,587
1,587
MEMBER
null
First PR to clean up the fast tokenizers integration.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3706/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3706", "html_url": "https://github.com/huggingface/transformers/pull/3706", "diff_url": "https://github.com/huggingface/transformers/pull/3706.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3706.patch", "merged_at": 1587210238000 }
https://api.github.com/repos/huggingface/transformers/issues/3705
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3705/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3705/comments
https://api.github.com/repos/huggingface/transformers/issues/3705/events
https://github.com/huggingface/transformers/pull/3705
596,842,427
MDExOlB1bGxSZXF1ZXN0NDAxMDY1MjIx
3,705
Update tokenizers to 0.7.0-rc5
{ "login": "n1t0", "id": 1217986, "node_id": "MDQ6VXNlcjEyMTc5ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/n1t0", "html_url": "https://github.com/n1t0", "followers_url": "https://api.github.com/users/n1t0/followers", "following_url": "https://api.github.com/users/n1t0/following{/other_user}", "gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}", "starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/n1t0/subscriptions", "organizations_url": "https://api.github.com/users/n1t0/orgs", "repos_url": "https://api.github.com/users/n1t0/repos", "events_url": "https://api.github.com/users/n1t0/events{/privacy}", "received_events_url": "https://api.github.com/users/n1t0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=h1) Report\n> Merging [#3705](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc65afc4dfac3badf3de3be395d4023b44c61bdd&el=desc) will **decrease** coverage by `0.11%`.\n> The diff coverage is `50.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3705/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3705 +/- ##\n==========================================\n- Coverage 78.14% 78.02% -0.12% \n==========================================\n Files 104 104 \n Lines 17723 17710 -13 \n==========================================\n- Hits 13849 13818 -31 \n- Misses 3874 3892 +18 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.50% <0.00%> (ΓΈ)` | |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <100.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `85.76% <0.00%> (-1.77%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.32% <0.00%> (-0.58%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.07% <0.00%> (-0.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.84% <0.00%> (-0.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `85.53% <0.00%> (-0.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `90.40% <0.00%> (-0.03%)` | :arrow_down: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=footer). Last update [bc65afc...e31554a](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,586
1,586
1,586
MEMBER
null
This includes some bug fixes (around added tokens) and a small breaking change.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3705/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3705", "html_url": "https://github.com/huggingface/transformers/pull/3705", "diff_url": "https://github.com/huggingface/transformers/pull/3705.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3705.patch", "merged_at": 1586543029000 }
https://api.github.com/repos/huggingface/transformers/issues/3704
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3704/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3704/comments
https://api.github.com/repos/huggingface/transformers/issues/3704/events
https://github.com/huggingface/transformers/issues/3704
596,837,916
MDU6SXNzdWU1OTY4Mzc5MTY=
3,704
Queries about the Notation and Model training of T5 and ELECTRA sentiment classification.
{ "login": "innat", "id": 17668390, "node_id": "MDQ6VXNlcjE3NjY4Mzkw", "avatar_url": "https://avatars.githubusercontent.com/u/17668390?v=4", "gravatar_id": "", "url": "https://api.github.com/users/innat", "html_url": "https://github.com/innat", "followers_url": "https://api.github.com/users/innat/followers", "following_url": "https://api.github.com/users/innat/following{/other_user}", "gists_url": "https://api.github.com/users/innat/gists{/gist_id}", "starred_url": "https://api.github.com/users/innat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/innat/subscriptions", "organizations_url": "https://api.github.com/users/innat/orgs", "repos_url": "https://api.github.com/users/innat/repos", "events_url": "https://api.github.com/users/innat/events{/privacy}", "received_events_url": "https://api.github.com/users/innat/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi! \r\n\r\n- 1 - casing is the difference between lowercasing and uppercasing. Uncased models do not handle uppercase letters, and therefore lowercase them:\r\n\r\n```py\r\nfrom transformers import AutoTokenizer\r\n\r\nuncased_tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\ncased_tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\nprint(uncased_tokenizer.tokenize(\"Hi, this is Lysandre\"))\r\n# ['hi', ',', 'this', 'is', 'l', '##ys', '##and', '##re'] <-- notice how uppercase letters are now lowercased\r\n\r\nprint(cased_tokenizer.tokenize(\"Hi, this is Lysandre\"))\r\n# ['Hi', ',', 'this', 'is', 'L', '##ys', '##and', '##re']\r\n```\r\n\r\n- 2 - These should be clarified with model cards on the [model hub](https://huggingface.co/models) but we haven't gotten to changing them yet. \r\n\r\nXLM models are usually multilingual, which is the case for those you mentioned: `ende` means english-german, `enfr`, english-french, `xnli15` means the 15 languages that are used in [XNLI](https://www.nyu.edu/projects/bowman/xnli/).\r\n\r\nThe following number is the hidden size, e.g. `1024` means that the hidden size of the model is 1024.\r\n\r\n- 3 - You may useT5 for sentiment classification, ELECTRA as well but with a bit more additional work. \r\n\r\nAs @craffel said in the issue you mentioned, T5 was trained with SST-2 so should work out-of-the-box if you follow what he mentioned in this issue.\r\n\r\nThere is no current `ElectraForSequenceClassification` as ELECTRA is so new, but it will certainly make its way in the library in the coming weeks! Once this head is here (feel free to add it yourself, it would be as easy as copying one head from one other modeling file and putting it for ELECTTRA), ELECTRA can be used for sentiment classification, but it would require you to fine-tune it first to a sentiment classification dataset (like the SST-2 dataset). \r\n\r\nIf you're looking at easy sentiment classification, please take a look at the pipelines and at the [already-finetuned sequence classification models](https://huggingface.co/models?filter=text-classification) and look for sentiment classification especially.", "@LysandreJik thanks, it was helpful πŸ™‚", "Hi, it is easy to use the pre-trained T5 models for sentiment ID. You could do something like\r\n```Python\r\nMODEL_NAME = \"t5-base\"\r\nmodel = transformers.T5ForConditionalGeneration.from_pretrained(MODEL_NAME)\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_NAME)\r\ninput_text = \"sst2 sentence: This movie was great! I loved the acting.\"\r\ninputs = tokenizer.encode_plus(input_text, return_token_type_ids=False, return_tensors=\"pt\")\r\nprint(tokenizer.decode(model.generate(**inputs)[0]))\r\ninput_text = \"sst2 sentence: The acting was so bad in this movie I left immediately.\"\r\ninputs = tokenizer.encode_plus(input_text, return_token_type_ids=False, return_tensors=\"pt\")\r\nprint(tokenizer.decode(model.generate(**inputs)[0]))\r\n```\r\nThe `\"sst2 sentence:\"` prefix is what we used for the SST-2 task. It is a sentiment ID task. The model needs to see this prefix to know what task you want it to undertake.", "Hi, @craffel Thank for your quick response and the intuitive code snippet. As I said, I am trying to implement **T5** for a `binary sentiment classification` task (label as `1` and `0`). So, if I want to use **T5**, I've to treat my task as a **text-to-text**, in other words, `positive` and `negative`. 
But I feel a bit confused, if I have the following scenario how should I approach. \r\n\r\n## Model loading\r\n```python\r\nMODEL_NAME = \"t5-base\"\r\ntransformer_layer = transformers.T5ForConditionalGeneration.from_pretrained(MODEL_NAME)\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_NAME)\r\n```\r\n\r\n## A general encoder \r\n```python\r\ndef regular_encode(texts, tokenizer, maxlen=512):\r\n enc_di = tokenizer.batch_encode_plus(\r\n texts, \r\n return_attention_masks=False, \r\n return_token_type_ids=False,\r\n pad_to_max_length=True,\r\n max_length=maxlen\r\n )\r\n return np.array(enc_di['input_ids'])\r\n```\r\n\r\n## Build the model (as per my task)\r\n```python\r\ndef build_model(transformer, max_len=190):\r\n input_word_ids = Input(shape=(max_len,), dtype=tf.int32)\r\n sequence_output = transformer(input_word_ids)[0]\r\n cls_token = sequence_output[:, 0, :]\r\n out = Dense(1, activation='sigmoid')(cls_token)\r\n model = Model(inputs=input_word_ids, outputs=out)\r\n model.compile(Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])\r\n\r\n return model\r\n```\r\n\r\n## Tokenized the data and grab the targets int(1,0)\r\n```python\r\nx_train = regular_encode(data.text, tokenizer, maxlen=190)\r\ny_train = data.target.values # (0, 1)\r\nmodel = build_model(transformer_layer, max_len=190)\r\nmodel.fit...\r\nmodel.predict...\r\n```\r\nI sure I'm missing something crucial part that is not considering `text-to-text` manner. If I convert `1` and `0` of labels as `Positive` and `Negative`...I mean shouldn't the target need to be numeric! And about the prefix, `sst2 sentence:` so, this is, in other words, is a string indicator to inform the model about the goal or task. So, do I have to add this string at the beginning of every text sentence or (samples)?", "> I sure I'm missing something crucial part that is not considering text-to-text manner. If I convert 1 and 0 of labels as Positive and Negative...I mean shouldn't the target need to be numeric!\r\n\r\nNo, the target should *always* be text for T5. You should map your 0/1 labels to the words \"negative\" and \"positive\" and fine-tune T5 to predict those words, and then map them back to 0/1 after the model outputs the text if needed. This is the point of the text-to-text framework - all tasks take text as input and produce text as output. So, for example, your \"build model\" code should not include a dense layer with a sigmoid output, etc. There is no modification to the model structure necessary whatsoever.\r\n\r\n> And about the prefix, sst2 sentence: so, this is, in other words, is a string indicator to inform the model about the goal or task. So, do I have to add this string at the beginning of every text sentence or (samples)?\r\n\r\nYes, that is the intention.", "@LysandreJik @craffel \r\nPlease check this issue!\r\nAs per the discussion I have a similar approach on binary classification on the text. But it seems that I am doing something wrong. I have also converted the target 0 and 1 to \"0\" and \"1\". 
Don't know where I am getting wrong.\r\n```\r\nMODEL_NAME = \"t5-base\"\r\ntransformer_layer = transformers.TFT5ForConditionalGeneration.from_pretrained(MODEL_NAME)\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_NAME)\r\n```\r\n\r\n```\r\ndef regular_encode(texts, tokenizer, maxlen=512):\r\n enc_di = tokenizer.batch_encode_plus(\r\n texts, \r\n return_attention_masks=False, \r\n return_token_type_ids=False,\r\n pad_to_max_length=True,\r\n max_length=maxlen\r\n )\r\n return np.array(enc_di['input_ids'])\r\n```\r\n```\r\ndef build_model(transformer, max_len=190):\r\n input_word_ids = Input(shape=(max_len,), dtype=tf.int32)\r\n sequence_output = transformer(input_word_ids)[0]\r\n cls_token = sequence_output[:, 0, :]\r\n out = Dense(1, activation='sigmoid')(cls_token)\r\n model = Model(inputs=input_word_ids, outputs=out)\r\n model.compile(Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])\r\n\r\n return model\r\n\r\n```\r\n\r\n\r\n```\r\nx_train = regular_encode(train_df.new_text, tokenizer, maxlen=190)\r\ny_train = train_df.target.values # (0, 1) 0 and 1 convert to string\r\nmodel = build_model(transformer_layer, max_len=190)\r\n```\r\n```\r\nValueError: in converted code:\r\n\r\n /opt/conda/lib/python3.6/site-packages/transformers/modeling_tf_t5.py:854 call *\r\n encoder_outputs = self.encoder(\r\n /opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py:822 __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n /opt/conda/lib/python3.6/site-packages/transformers/modeling_tf_t5.py:445 call\r\n raise ValueError(\"You have to specify either input_ids or inputs_embeds\")\r\n\r\n ValueError: You have to specify either input_ids or inputs_embeds\r\n```\r\nAll inputs are converted to this format\r\n\"sst2 sentence: our deeds are the reason for this...\"\r\nI used the same things but having trouble with this error. I need to fine-tune the model on my custom dataset.", "Hi @vapyc, this seems to be an unrelated issue. Would you mind opening a new issue? When you do, would it be possible for you to show the entire stack trace, e.g. the line where it fails in your code, alongside all the information you've provided here? Thanks.", "@LysandreJik I'd be very interested in an `ElectraForSequenceClassification` head, as I'm not confident I could implement it myself since I'm quite new to Transformers and still learning how the library is organized. Any chance this is coming soon?", "i just posted a pull request ... was super simple to get it working\r\n\r\nhttps://github.com/huggingface/transformers/pull/4257", "@liuzzi awesome! I look forward to trying it out.", "@liuzzi wonderful, thanks a lot. Well done brother. Can you share a working notebook on this, please? Thank you.", "@innat i did not use a notebook to fine-tune, but for sentiment analysis you can just use the run_glue.py script with the SST-2 task which is a binary sentiment analysis task. You shouldn't even need to change any code, just make sure your dataset follows the format of SST-2.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,594
1,594
NONE
null
I have a few questions about the model notation, plus a short query about T5 and ELECTRA. I would have made separate issues, but none of this is too complex. I mainly work on CV, so sorry if I am being silly.

### 1. Cased or uncased

What is meant by cased and uncased?

```
bert-base-uncased
bert-base-cased
```

### 2. Suffix

I was trying to run the XLM model, and among the pre-trained weights I found the following. I understand the XLM-MLM part, but couldn't work out the rest, e.g. `enfr-1024`, `enro-1024`, etc.

```
xlm-mlm-enfr-1024
xlm-mlm-enro-1024
xlm-mlm-tlm-xnli15-1024
```

### 3. Sentiment analysis using T5 and ELECTRA

Is it possible to use these two models for sentiment classification, simply as binary classification? How can we implement these two transformers? I have a high-level overview of T5: it treats both input and target as text. I [found this](https://github.com/google-research/text-to-text-transfer-transformer/issues/109) useful, but it is a bit troublesome to implement. Using transformers, is there a convenient way to do it?
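On question 3, the maintainers' comments quoted above show that pre-trained T5 already handles SST-2 via a task prefix. A condensed sketch of that usage (the model size and example sentence are illustrative):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# The "sst2 sentence:" prefix tells T5 which training task to perform;
# it answers in text ("positive" / "negative"), not with a logit.
text = "sst2 sentence: This movie was great! I loved the acting."
input_ids = tokenizer.encode(text, return_tensors="pt")
print(tokenizer.decode(model.generate(input_ids)[0]))
```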
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3704/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3703
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3703/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3703/comments
https://api.github.com/repos/huggingface/transformers/issues/3703/events
https://github.com/huggingface/transformers/pull/3703
596,790,811
MDExOlB1bGxSZXF1ZXN0NDAxMDIyNzA3
3,703
Token-level regression mode added in ForTokenClassification models
{ "login": "gsarti", "id": 16674069, "node_id": "MDQ6VXNlcjE2Njc0MDY5", "avatar_url": "https://avatars.githubusercontent.com/u/16674069?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gsarti", "html_url": "https://github.com/gsarti", "followers_url": "https://api.github.com/users/gsarti/followers", "following_url": "https://api.github.com/users/gsarti/following{/other_user}", "gists_url": "https://api.github.com/users/gsarti/gists{/gist_id}", "starred_url": "https://api.github.com/users/gsarti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gsarti/subscriptions", "organizations_url": "https://api.github.com/users/gsarti/orgs", "repos_url": "https://api.github.com/users/gsarti/repos", "events_url": "https://api.github.com/users/gsarti/events{/privacy}", "received_events_url": "https://api.github.com/users/gsarti/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=h1) Report\n> Merging [#3703](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6435b9f908e7361330db89e263a65b0a58060d11&el=desc) will **decrease** coverage by `0.98%`.\n> The diff coverage is `59.37%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3703/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3703 +/- ##\n==========================================\n- Coverage 78.13% 77.14% -0.99% \n==========================================\n Files 104 104 \n Lines 17723 17752 +29 \n==========================================\n- Hits 13847 13695 -152 \n- Misses 3876 4057 +181 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `74.50% <0.00%> (-0.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.33% <50.00%> (-2.45%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.88% <66.66%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.04% <66.66%> (-10.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.78% <66.66%> (-0.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.02% <70.00%> (-0.57%)` | :arrow_down: |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `72.38% <75.00%> (-0.27%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.61% <0.00%> (-2.64%)` | :arrow_down: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=footer). Last update [6435b9f...c6692e6](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "LGTM but I'll let others chime in.", "For ELECTRA, the discrepancy should have been fixed by 500aa12318ce5acd289d5edb6cb8266b3c3b162e, so can you propagate your changes there too?", "Related to naming, as it's for the `XXXForTokenClassification` when you have a single label wouldn't you expect to get a cross-entropy loss such as [binary cross-entropy](https://pytorch.org/docs/stable/nn.html#bceloss), rather than regression? Seeing as it's a classification model?", "I'd say the same would apply to `XXXForSentenceClassification`, right? It would probably be best to decouple regression and classification for those two classes instead of having regression as a special case, and make the behavior for `num_labels == 1` the same as `num_labels == 2` for labels that are not one-hot encoded.\r\n\r\nProbably worth a dedicated PR to decouple both instead of handling it here!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Any chance of blowing new life into this? Token classification is particularly familiar due to NER, but in many research fields (e.g. psycholinguistic studies) we are interested in a lot more than that. Continuous values for tokens are very common there. I'd love to see regression and multilabel classification for token classification models.", "Sad to see this die :(" ]
1,586
1,668
1,594
CONTRIBUTOR
null
This is related to issue #3646, which I opened two days ago and which @julien-c considered interesting. I added **support for token-level regression in Bert, Roberta, Albert, XLNet, XLM, DistilBert, and in the template for adding a new model** when `self.num_labels == 1`, fixing the docstrings to match the new changes (and correcting the one for XLNetForTokenClassification, which had been copied from the XLNetForMultipleChoice one). Given two different approaches to computing `active_labels`, I favored the one used in more recent models (e.g. Albert). The change was tested against the test suite and the examples (without RUN_SLOW and RUN_CUSTOM_TOKENIZERS, though) and passed all tests. I didn't feel comfortable adding this to Electra, since its TokenClassification implementation seems to be missing the `num_labels` variable for some reason. I didn't add new tests, since the sentence-level regression case wasn't covered by the testing suite either. **Edit:** After the fix to `ElectraForTokenClassification`'s `num_labels` attribute, I added support for token-level regression there too.
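For context, a minimal sketch of the loss selection this PR describes for the token-classification heads (a standalone illustration, not the exact diff; the helper name is made up here):

```python
import torch.nn as nn

def token_classification_loss(logits, labels, num_labels):
    # num_labels == 1 selects token-level regression, otherwise classification.
    if num_labels == 1:
        loss_fct = nn.MSELoss()
        return loss_fct(logits.view(-1), labels.view(-1).float())
    loss_fct = nn.CrossEntropyLoss()
    return loss_fct(logits.view(-1, num_labels), labels.view(-1))
```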
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3703/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3703/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3703", "html_url": "https://github.com/huggingface/transformers/pull/3703", "diff_url": "https://github.com/huggingface/transformers/pull/3703.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3703.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/3702
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3702/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3702/comments
https://api.github.com/repos/huggingface/transformers/issues/3702/events
https://github.com/huggingface/transformers/pull/3702
596,734,349
MDExOlB1bGxSZXF1ZXN0NDAwOTc2Mzgz
3,702
Add `run_glue_tpu.py` that trains models on TPUs
{ "login": "jysohn23", "id": 19496130, "node_id": "MDQ6VXNlcjE5NDk2MTMw", "avatar_url": "https://avatars.githubusercontent.com/u/19496130?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jysohn23", "html_url": "https://github.com/jysohn23", "followers_url": "https://api.github.com/users/jysohn23/followers", "following_url": "https://api.github.com/users/jysohn23/following{/other_user}", "gists_url": "https://api.github.com/users/jysohn23/gists{/gist_id}", "starred_url": "https://api.github.com/users/jysohn23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jysohn23/subscriptions", "organizations_url": "https://api.github.com/users/jysohn23/orgs", "repos_url": "https://api.github.com/users/jysohn23/repos", "events_url": "https://api.github.com/users/jysohn23/events{/privacy}", "received_events_url": "https://api.github.com/users/jysohn23/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=h1) Report\n> Merging [#3702](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f68d22850ced09bb194b30068ff94ca3409f0879&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `41.66%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3702/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3702 +/- ##\n==========================================\n- Coverage 78.06% 78.03% -0.03% \n==========================================\n Files 100 100 \n Lines 17134 17144 +10 \n==========================================\n+ Hits 13375 13378 +3 \n- Misses 3759 3766 +7 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3702/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.30% <36.36%> (-0.81%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3702/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `97.01% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3702/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (ΓΈ)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=footer). Last update [f68d228...6e959fd](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hmm don't worry about me, this shouldn't be a problem – feel free to merge if ready @LysandreJik !", "@jysohn23 I'm trying to run a variant of `run_glue_tpu.py` on TPUs and am stuck at an oom error. The first iteration of the below [for loop](https://github.com/jysohn23/transformers/blob/tpu/examples/run_tpu_glue.py#L150) runs fine, but it breaks on the second one. Any pointers on how to fix this?\r\n```\r\ntrain_dataloader = pl.ParallelLoader(dataloader, [args.device]).per_device_loader(args.device)\r\nepoch_iterator = tqdm(train_dataloader, desc=\"Iteration\", total=len(dataloader), disable=disable_logging)\r\nfor step, batch in enumerate(epoch_iterator):\r\n```\r\nI tried reducing the batch-size to 1 and running on a single core, both led to the same error. 
I'm using this `gcr.io/tpu-pytorch/xla:nightly_3.6` image for my experiments.\r\n\r\nfull log - shorturl.at/iswxR\r\nfew lines of the error log -\r\n```\r\n020-06-30 21:49:29.304998: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] >>> Dumping Computation 0 | 1/6136 [01:16<131:08:36, 76.95s/it]\r\n2020-06-30 21:49:29.305126: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] HloModule SyncTensorsGraph.33776, input_output_alias={ {0}: (250, {}), {1}: (249, {}), {2}: (265, {}), {3}: (248, {}), {4}: (247, {}), {5}: (246, {}), {6}: (245, {}), {7}: (244, {}), {8}: (269, {}), {9}: (243, {}), {10}: (242, {}), {11}: (241, {}), {12}: (240, {}), {13}: (239, {}), {14}: (271, {}), {15}: (238, {}), {16}: (237, {}), {17}: (236, {}), {18}: (235, {}), {19}: (234, {}), {20}: (273, {}), {21}: (233, {}), {22}: (232, {}), {23}: (231, {}), {24}: (230, {}), {25}: (229, {}), {26}: (274, {}), {27}: (228, {}), {28}: (227, {}), {29}: (226, {}), {30}: (225, {}), {31}: (224, {}), {32}: (276, {}), {33}: (223, {}), {34}: (222, {}), {35}: (221, {}), {36}: (220, {}), {37}: (219, {}), {38}: (277, {}), {39}: (218, {}), {40}: (217, {}), {41}: (216, {}), {42}: (215, {}), {43}: (214, {}), {44}: (279, {}), {45}: (213, {}), {46}: (212, {}), {47}: (211, {}), {48}: (210, {}), {49}: (209, {}), {50}: (280, {}), {51}: (208, {}), {52}: (207, {}), {53}: (206, {}), {54}: (205, {}), {55}: (204, {}), {56}: (282, {}), {57}: (203, {}), {58}: (202, {}), {59}: (201, {}), {60}: (200, {}), {61}: (199, {}), {62}: (283, {}), {63}: (198, {}), {64}: (197, {}), {65}: (196, {}), {66}: (195, {}), {67}: (194, {}), {68}: (285, {}), {69}: (193, {}), {70}: (192, {}), {71}: (191, {}), {72}: (190, {}), {73}: (189, {}), {74}: (286, {}), {75}: (188, {}), {76}: (187, {}), {77}: (186, {}), {78}: (185, {}), {79}: (184, {}), {80}: (288, {}), {81}: (183, {}), {82}: (182, {}), {83}: (181, {}), {84}: (180, {}), {85}: (179, {}), {86}: (289, {}), {87}: (178, {}), {88}: (177, {}), {89}: (176, {}), {90}: (175, {}), {91}: (174, {}), {92}: (291, {}), {93}: (173, {}), {94}: (172, {}), {95}: (171, {}), {96}: (170, {}), {97}: (169, {}), {98}: (292, {}), {99}: (168, {}), {100}: (167, {}), {101}: (166, {}), {102}: (165, {}), {103}: (164, {}), {104}: (294, {}), {105}: (163, {}), {106}: (162, {}), {107}: (161, {}), {108}: (160, {}), {109}: (159, {}), {110}: (295, {}), {111}: (158, {}), {112}: (157, {}), {113}: (156, {}), {114}: (155, {}), {115}: (154, {}), {116}: (297, {}), {117}: (153, {}), {118}: (152, {}), {119}: (151, {}), {120}: (150, {}), {121}: (149, {}), {122}: (298, {}), {123}: (148, {}), {124}: (147, {}), {125}: (146, {}), {126}: (145, {}), {127}: (144, {}), {128}: (300, {}), {129}: (143, {}), {130}: (142, {}), {131}: (141, {}), {132}: (140, {}), {133}: (139, {}), {134}: (301, {}), {135}: (138, {}), {136}: (137, {}), {137}: (136, {}), {138}: (135, {}), {139}: (134, {}), {140}: (303, {}), {141}: (133, {}), {142}: (132, {}), {143}: (131, {}), {144}: (130, {}), {145}: (129, {}), {146}: (304, {}), {147}: (128, {}), {148}: (127, {}), {149}: (126, {}), {150}: (125, {}), {151}: (124, {}), {152}: (306, {}), {153}: (123, {}), {154}: (122, {}), {155}: (121, {}), {156}: (120, {}), {157}: (119, {}), {158}: (307, {}), {159}: (118, {}), {160}: (117, {}), {161}: (116, {}), {162}: (115, {}), {163}: (114, {}), {164}: (309, {}), {165}: (113, {}), {166}: (112, {}), {167}: (111, {}), {168}: (110, {}), {169}: (109, {}), {170}: (310, {}), {171}: (108, {}), {172}: (107, {}), {173}: (106, {}), {174}: (105, {}), {175}: (104, {}), {176}: (312, {}), {177}: (103, {}), {178}: 
(102, {}), {179}: (101, {}), {180}: (100, {}), {181}: (99, {}), {182}: (313, {}), {183}: (98, {}), {184}: (97, {}), {185}: (96, {}), {186}: (95, {}), {187}: (94, {}), {188}: (315, {}), {189}: (93, {}), {190}: (92, {}), {191}: (91, {}), {192}: (90, {}), {193}: (89, {}), {194}: (316, {}), {195}: (88, {}), {196}: (87, {}), {197}: (86, {}), {198}: (85, {}), {199}: (84, {}), {200}: (318, {}), {201}: (83, {}), {202}: (82, {}), {203}: (81, {}), {204}: (80, {}), {205}: (79, {}), {206}: (319, {}), {207}: (78, {}), {208}: (77, {}), {209}: (76, {}), {210}: (75, {}), {211}: (74, {}), {212}: (321, {}), {213}: (73, {}), {214}: (72, {}), {215}: (71, {}), {216}: (70, {}), {217}: (69, {}), {218}: (322, {}), {219}: (68, {}), {220}: (67, {}), {221}: (66, {}), {222}: (65, {}), {223}: (64, {}), {224}: (324, {}), {225}: (63, {}), {226}: (62, {}), {227}: (61, {}), {228}: (60, {}), {229}: (59, {}), {230}: (325, {}), {231}: (58, {}), {232}: (57, {}), {233}: (56, {}), {234}: (55, {}), {235}: (54, {}), {236}: (327, {}), {237}: (53, {}), {238}: (52, {}), {239}: (51, {}), {240}: (50, {}), {241}: (49, {}), {242}: (328, {}), {243}: (48, {}), {244}: (47, {}), {245}: (46, {}), {246}: (45, {}), {247}: (44, {}), {248}: (330, {}), {249}: (43, {}), {250}: (42, {}), {251}: (41, {}), {252}: (40, {}), {253}: (39, {}), {254}: (331, {}), {255}: (38, {}), {256}: (37, {}), {257}: (36, {}), {258}: (35, {}), {259}: (34, {}), {260}: (333, {}), {261}: (33, {}), {262}: (32, {}), {263}: (31, {}), {264}: (30, {}), {265}: (29, {}), {266}: (334, {}), {267}: (28, {}), {268}: (27, {}), {269}: (26, {}), {270}: (25, {}), {271}: (24, {}), {272}: (336, {}), {273}: (23, {}), {274}: (22, {}), {275}: (21, {}), {276}: (20, {}), {277}: (19, {}), {278}: (337, {}), {279}: (18, {}), {280}: (17, {}), {281}: (16, {}), {282}: (15, {}), {283}: (14, {}), {284}: (339, {}), {285}: (13, {}), {286}: (12, {}), {287}: (8, {}), {288}: (7, {}), {289}: (5, {}), {290}: (340, {}), {291}: (346, {}), {292}: (4, {}), {377}: (342, {}) }\r\n2020-06-30 21:49:29.305162: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] \r\n2020-06-30 21:49:29.305173: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %MaxComputation.2092 (x.2093: f32[], y.2094: f32[]) -> f32[] {\r\n2020-06-30 21:49:29.305181: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %x.2093 = f32[] parameter(0)\r\n2020-06-30 21:49:29.305196: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %y.2094 = f32[] parameter(1)\r\n2020-06-30 21:49:29.305204: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] ROOT %maximum.2095 = f32[] maximum(f32[] %x.2093, f32[] %y.2094)\r\n2020-06-30 21:49:29.305212: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] }\r\n2020-06-30 21:49:29.305221: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] \r\n2020-06-30 21:49:29.305235: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %AddComputation.2101 (x.2102: f32[], y.2103: f32[]) -> f32[] {\r\n2020-06-30 21:49:29.305244: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %x.2102 = f32[] parameter(0)\r\n2020-06-30 21:49:29.305254: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %y.2103 = f32[] parameter(1)\r\n2020-06-30 21:49:29.305264: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] ROOT %add.2104 = f32[] add(f32[] %x.2102, f32[] %y.2103)\r\n2020-06-30 21:49:29.305273: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] }\r\n2020-06-30 21:49:29.305283: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] \r\n.\r\n.\r\n.\r\n.\r\n2020-06-30 
21:49:29.568300: E 5603 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %subtract.5549 = f32[] subtract(f32[] %constant.5532, f32[] %constant.5533)\r\n2020-06-30 21:49:29.568320: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %constant.20745 = f32[] constant(0.125)\r\n2020-06-30 21:49:29.568321: E 5603 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %broadcast.5550 = f32[1,16,128,128]{3,2,1,0} broadcast(f32[] %subtract.5549), dimensions={}\r\n2020-06-30 21:49:29.568331: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %broadcast.20746 = f32[1024,4096]{1,0} broadcast(f32[] %constant.20745), dimensions={}\r\n2020-06-30 21:49:29.568332: E 5603 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %multiply.5551 = f32[1,16,128,128]{3,2,1,0} multiply(f32[1,16,128,128]{3,2,1,0} %multiply.5548, f32[1,16,128,128]{3,2,1,0} %broadcast.5550)\r\n2020-06-30 21:49:29.568342: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %multiply.20747 = f32[1024,4096]{1,0} multiply(f32[1024,4096]{1,0} %get-tuple-element.20744, f32[1024,4096]{1,0} %broadcast.20746)\r\n2020-06-30 21:49:29.568344: E 5603 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %broadcast.5552 = f32[1,16,128,128]{3,2,1,0} broadcast(f32[] %constant.5533), dimensions={}\r\n2020-06-30 21:49:29.568353: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %reshape.25706 = f32[1,1]{1,0} reshape(f32[] %p263.1975)\r\n2020-06-30 21:49:29.568354: E 5603 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %add.5553 = f32[1,16,128,128]{3,2,1,0} add(f32[1,16,128,128]{3,2,1,0} %multiply.5551, f32[1,16,128,128]{3,2,1,0} %broadcast.5552)\r\n.\r\n.\r\n.\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n (1) Resource exhausted: Ran out of memory in memory space vmem. 
It should not be possible to run out of vmem - please file a bug against XLA.\r\n\r\nLargest program allocations in vmem:\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n\t [[{{node XRTCompile}}]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n\t [[XRTCompile_G6]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n0 successful operations.\r\n0 derived errors ignored.\r\nTraceback (most recent call last):\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 235, in _mp_start_fn\r\n _start_fn(index, pf_cfg, fn, args)\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 229, in _start_fn\r\n fn(gindex, *args)\r\n File \"/export/share/akhilesh-gotmare/tpu_gedi/transformers/examples/run_tpu_glue.py\", line 797, in _mp_fn\r\n main(args)\r\n File \"/export/share/akhilesh-gotmare/tpu_gedi/transformers/examples/run_tpu_glue.py\", line 607, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer, disable_logging=disable_logging)\r\n File \"/export/share/akhilesh-gotmare/tpu_gedi/transformers/examples/run_tpu_glue.py\", line 186, in train\r\n for step, batch in enumerate(epoch_iterator):\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/tqdm/std.py\", line 1107, in __iter__\r\n for obj in iterable:\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/parallel_loader.py\", line 31, in __next__\r\n return self.next()\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/parallel_loader.py\", line 37, in next\r\n xm.mark_step()\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py\", line 549, in mark_step\r\n 
wait=xu.getenv_as('XLA_SYNC_WAIT', bool, False))\r\nRuntimeError: Resource exhausted: From /job:tpu_worker/replica:0/task:0:\r\n2 root error(s) found.\r\n (0) Resource exhausted: Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.\r\n\r\nLargest program allocations in vmem:\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n\t [[{{node XRTCompile}}]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n (1) Resource exhausted: Ran out of memory in memory space vmem. 
It should not be possible to run out of vmem - please file a bug against XLA.\r\n\r\nLargest program allocations in vmem:\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n\t [[{{node XRTCompile}}]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n\t [[XRTCompile_G6]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n0 successful operations.\r\n0 derived errors ignored.\r\nTraceback (most recent call last):\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 235, in _mp_start_fn\r\n _start_fn(index, pf_cfg, fn, args)\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 229, in _start_fn\r\n fn(gindex, *args)\r\n File \"/export/share/akhilesh-gotmare/tpu_gedi/transformers/examples/run_tpu_glue.py\", line 797, in _mp_fn\r\n main(args)\r\n File \"/export/share/akhilesh-gotmare/tpu_gedi/transformers/examples/run_tpu_glue.py\", line 607, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer, disable_logging=disable_logging)\r\n File \"/export/share/akhilesh-gotmare/tpu_gedi/transformers/examples/run_tpu_glue.py\", line 186, in train\r\n for step, batch in enumerate(epoch_iterator):\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/tqdm/std.py\", line 1107, in __iter__\r\n for obj in iterable:\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/parallel_loader.py\", line 31, in __next__\r\n return self.next()\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/parallel_loader.py\", line 37, in next\r\n xm.mark_step()\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py\", line 549, in mark_step\r\n 
wait=xu.getenv_as('XLA_SYNC_WAIT', bool, False))\r\nRuntimeError: Resource exhausted: From /job:tpu_worker/replica:0/task:0:\r\n2 root error(s) found.\r\n (0) Resource exhausted: Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.\r\n\r\nLargest program allocations in vmem:\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n\t [[{{node XRTCompile}}]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n (1) Resource exhausted: Ran out of memory in memory space vmem. 
It should not be possible to run out of vmem - please file a bug against XLA.\r\n\r\nLargest program allocations in vmem:\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...\r\n Allocation type: scoped\r\n\r\n\t [[{{node XRTCompile}}]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n\t [[XRTCompile_G6]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n0 successful operations.\r\n0 derived errors ignored.\r\n/root/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown\r\n len(cache))\r\n/root/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown\r\n len(cache))\r\n/root/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown\r\n len(cache))\r\n/root/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown\r\n len(cache))\r\n/root/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown\r\n len(cache))\r\n/root/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown\r\n len(cache))\r\nTraceback (most recent call last):\r\n File \"run_tpu_glue.py\", line 806, in <module>\r\n main_cli()\r\n File \"run_tpu_glue.py\", line 802, in main_cli\r\n xmp.spawn(_mp_fn, args=(args,), nprocs=args.num_cores)\r\n File 
\"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py\", line 300, in spawn\r\n start_method=start_method)\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py\", line 158, in start_processes\r\n while not context.join():\r\n File \"/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py\", line 113, in join\r\n (error_index, exitcode)\r\nException: process 6 terminated with exit code 17\r\n```\r\n\r\n\r\n\r\n\r\n" ]
1,586
1,593
1,586
COLLABORATOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3702/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3702/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3702", "html_url": "https://github.com/huggingface/transformers/pull/3702", "diff_url": "https://github.com/huggingface/transformers/pull/3702.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3702.patch", "merged_at": 1586537634000 }
https://api.github.com/repos/huggingface/transformers/issues/3701
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3701/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3701/comments
https://api.github.com/repos/huggingface/transformers/issues/3701/events
https://github.com/huggingface/transformers/issues/3701
596,685,019
MDU6SXNzdWU1OTY2ODUwMTk=
3,701
Problem with https://transformer.huggingface.co/doc/gpt2-xl
{ "login": "MarxEngelsLeninStalin", "id": 63361120, "node_id": "MDQ6VXNlcjYzMzYxMTIw", "avatar_url": "https://avatars.githubusercontent.com/u/63361120?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarxEngelsLeninStalin", "html_url": "https://github.com/MarxEngelsLeninStalin", "followers_url": "https://api.github.com/users/MarxEngelsLeninStalin/followers", "following_url": "https://api.github.com/users/MarxEngelsLeninStalin/following{/other_user}", "gists_url": "https://api.github.com/users/MarxEngelsLeninStalin/gists{/gist_id}", "starred_url": "https://api.github.com/users/MarxEngelsLeninStalin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MarxEngelsLeninStalin/subscriptions", "organizations_url": "https://api.github.com/users/MarxEngelsLeninStalin/orgs", "repos_url": "https://api.github.com/users/MarxEngelsLeninStalin/repos", "events_url": "https://api.github.com/users/MarxEngelsLeninStalin/events{/privacy}", "received_events_url": "https://api.github.com/users/MarxEngelsLeninStalin/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Duplicate of https://github.com/huggingface/transformers/issues/3452.\r\n\r\nWe should just remove the option cc @julien-c ", "You’re right, I’ll get on it", "Also best username ever, @MarxEngelsLeninStalin ", "Oh, I like that model as someone who has problems with spelling/grammar and fatigue it's useful to use xl one has more knowledge on it (yes I know it's not accurate but more accurate then large model). Is there any prospect of it being turned on again in the future?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@Marxist-Leninist, unfortunately adding it back isn't on our near future roadmap." ]
1,586
1,591
1,591
NONE
null
Hi, when using it, simply switch the model-size button up to the xl one. When you try to generate a word prediction, it simply never loads, regardless of what all the other settings are. Every other gpt2 model size works, except the xl one. I tried it on Android and Linux (same problem) and in different browsers, Firefox and Google Chrome (same problem). It has nothing to do with add-ons, because the same problem occurs with all of them turned off.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3701/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3701/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3700
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3700/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3700/comments
https://api.github.com/repos/huggingface/transformers/issues/3700/events
https://github.com/huggingface/transformers/issues/3700
596,618,877
MDU6SXNzdWU1OTY2MTg4Nzc=
3,700
Would the weights for the main body of the pretrained GPT2Model and pretrained GPT2DoubleHeadsModel be identical?
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, except if you freeze the transformer's (transformer being the base model) weights during the training, these will be modified during the fine-tuning. I believe none of our scripts in the library freeze the transformer's weight; so the weights would not be identical.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,591
1,591
NONE
null
Hello, From my understanding, the difference between GPT2Model and GPT2DoubleHeadsModel is that GPT2Model does not include any output head, whereas GPT2DoubleHeadsModel includes two types of output heads (lmhead and mchead). I am wondering: would the weights used in the pretrained GPT2Model be identical to the weights used in the main body (every part of the model except the output heads) of the pretrained GPT2DoubleHeadsModel? Or would the two sets of weights be different, since GPT2DoubleHeadsModel was trained with the output heads included, whereas GPT2Model was trained without any output head? Thank you (I hope my question is understandable),
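One way to check this empirically is to load both pretrained checkpoints and compare the shared transformer body directly; a quick sketch (both classes load the same `gpt2` checkpoint, so this is expected to print `True`):

```python
import torch
from transformers import GPT2Model, GPT2DoubleHeadsModel

base = GPT2Model.from_pretrained("gpt2")
double = GPT2DoubleHeadsModel.from_pretrained("gpt2")

base_state = base.state_dict()
body_state = double.transformer.state_dict()  # the main body, without the heads

identical = all(torch.equal(base_state[k], body_state[k]) for k in base_state)
print("Shared transformer weights identical:", identical)
```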
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3700/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3699
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3699/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3699/comments
https://api.github.com/repos/huggingface/transformers/issues/3699/events
https://github.com/huggingface/transformers/issues/3699
596,601,173
MDU6SXNzdWU1OTY2MDExNzM=
3,699
Bug in ElectraForTokenClassification
{ "login": "LiZongyue", "id": 36918088, "node_id": "MDQ6VXNlcjM2OTE4MDg4", "avatar_url": "https://avatars.githubusercontent.com/u/36918088?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LiZongyue", "html_url": "https://github.com/LiZongyue", "followers_url": "https://api.github.com/users/LiZongyue/followers", "following_url": "https://api.github.com/users/LiZongyue/following{/other_user}", "gists_url": "https://api.github.com/users/LiZongyue/gists{/gist_id}", "starred_url": "https://api.github.com/users/LiZongyue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LiZongyue/subscriptions", "organizations_url": "https://api.github.com/users/LiZongyue/orgs", "repos_url": "https://api.github.com/users/LiZongyue/repos", "events_url": "https://api.github.com/users/LiZongyue/events{/privacy}", "received_events_url": "https://api.github.com/users/LiZongyue/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Oh no, it should be `self.config.num_labels` instead of `self.num_labels` [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_electra.py#L665)!", "My bad, it should be fixed now!" ]
1,586
1,586
1,586
NONE
null
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): ElectraForTokenClassification Language I am using the model on (English, Chinese ...): English The problem arises when using: * [√] the official example scripts: (give details below) huggingface.co/transformers/model_doc/electra.html#electrafortokenclassification * [Γ—] my own modified scripts: (give details below) The tasks I am working on is: * [ Γ—] an official GLUE/SQUaD task: (give the name) * [ √] my own task or dataset: (give details below) Ran the example given in the documentation from transformers import ElectraTokenizer, ElectraForTokenClassification import torch tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator') model = ElectraForTokenClassification.from_pretrained('google/electra-small-discriminator') input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1 labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0) # Batch size 1 outputs = model(input_ids, labels=labels) loss, scores = outputs[:2] ## To reproduce Steps to reproduce the behavior: 1. Start Google Colab 2. Install Transformer and requriements 3. run the code <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior AttributeError: 'ElectraForTokenClassification' object has no attribute 'num_labels' <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.8.0 - Platform: Windows 10 - Python version: 3.7.6 - PyTorch version (GPU?): Colab - Tensorflow version (GPU?): Colab - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3699/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3698
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3698/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3698/comments
https://api.github.com/repos/huggingface/transformers/issues/3698/events
https://github.com/huggingface/transformers/pull/3698
596,600,635
MDExOlB1bGxSZXF1ZXN0NDAwODY2ODc1
3,698
More doc for model cards
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[]
1,586
1,586
1,586
MEMBER
null
see https://github.com/huggingface/transformers/pull/3679#pullrequestreview-389368270
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3698/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3698", "html_url": "https://github.com/huggingface/transformers/pull/3698", "diff_url": "https://github.com/huggingface/transformers/pull/3698.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3698.patch", "merged_at": 1586362372000 }
https://api.github.com/repos/huggingface/transformers/issues/3697
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3697/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3697/comments
https://api.github.com/repos/huggingface/transformers/issues/3697/events
https://github.com/huggingface/transformers/pull/3697
596,519,422
MDExOlB1bGxSZXF1ZXN0NDAwODAxNTM5
3,697
Fix force_download of files on Windows
{ "login": "calpt", "id": 36051308, "node_id": "MDQ6VXNlcjM2MDUxMzA4", "avatar_url": "https://avatars.githubusercontent.com/u/36051308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/calpt", "html_url": "https://github.com/calpt", "followers_url": "https://api.github.com/users/calpt/followers", "following_url": "https://api.github.com/users/calpt/following{/other_user}", "gists_url": "https://api.github.com/users/calpt/gists{/gist_id}", "starred_url": "https://api.github.com/users/calpt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/calpt/subscriptions", "organizations_url": "https://api.github.com/users/calpt/orgs", "repos_url": "https://api.github.com/users/calpt/repos", "events_url": "https://api.github.com/users/calpt/events{/privacy}", "received_events_url": "https://api.github.com/users/calpt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can't test this right now as I don't have access to a Windows machine - does this still work if the model doesn't already exist?", "> \r\n> \r\n> Can't test this right now as I don't have access to a Windows machine - does this still work if the model doesn't already exist?\r\n\r\nYes, I tested that πŸ˜„. There shouldn't be any further differences between `rename` and `replace`. Here's [what the documentation says](https://docs.python.org/3/library/os.html#os.replace)." ]
1,586
1,586
1,586
CONTRIBUTOR
null
On Windows, `os.rename` raises

```
FileExistsError: [WinError 183] Cannot create a file when that file already exists:
```

when trying to re-download a model that already exists in the cache using `force_download=True`. This PR switches to `os.replace`, which overwrites the destination.
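For reference, `os.replace` overwrites an existing destination on both POSIX and Windows, whereas `os.rename` raises `FileExistsError` on Windows when the target exists. A minimal sketch of the portable pattern (the function name is illustrative, not the library's):

```python
import os

def move_into_cache(temp_file: str, cache_path: str) -> None:
    """Move a freshly downloaded file into the cache.

    os.replace overwrites an existing cache_path (needed for
    force_download=True); os.rename would raise FileExistsError
    on Windows if cache_path already exists.
    """
    os.replace(temp_file, cache_path)
```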
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3697/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3697", "html_url": "https://github.com/huggingface/transformers/pull/3697", "diff_url": "https://github.com/huggingface/transformers/pull/3697.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3697.patch", "merged_at": 1586457897000 }
https://api.github.com/repos/huggingface/transformers/issues/3696
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3696/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3696/comments
https://api.github.com/repos/huggingface/transformers/issues/3696/events
https://github.com/huggingface/transformers/issues/3696
596,494,132
MDU6SXNzdWU1OTY0OTQxMzI=
3,696
Cannot set a different tokenizer and model dir in `run_glue.py`
{ "login": "Lapis-Hong", "id": 23524486, "node_id": "MDQ6VXNlcjIzNTI0NDg2", "avatar_url": "https://avatars.githubusercontent.com/u/23524486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Lapis-Hong", "html_url": "https://github.com/Lapis-Hong", "followers_url": "https://api.github.com/users/Lapis-Hong/followers", "following_url": "https://api.github.com/users/Lapis-Hong/following{/other_user}", "gists_url": "https://api.github.com/users/Lapis-Hong/gists{/gist_id}", "starred_url": "https://api.github.com/users/Lapis-Hong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lapis-Hong/subscriptions", "organizations_url": "https://api.github.com/users/Lapis-Hong/orgs", "repos_url": "https://api.github.com/users/Lapis-Hong/repos", "events_url": "https://api.github.com/users/Lapis-Hong/events{/privacy}", "received_events_url": "https://api.github.com/users/Lapis-Hong/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,592
1,592
NONE
null
# πŸ› Bug ## Information the original code as follows, when i use args.tokenizer_name different from model_name_or_path, it still call for the model_name_or_path config file. tokenizer = AutoTokenizer.from_pretrained( args.tokenizer_name if args.tokenizer_name else args.model_name_or_path, do_lower_case=args.do_lower_case, cache_dir=args.cache_dir if args.cache_dir else None, ) It would be solved by adding config params as follows: tokenizer = AutoTokenizer.from_pretrained( args.tokenizer_name if args.tokenizer_name else args.model_name_or_path, do_lower_case=args.do_lower_case, config=config, cache_dir=args.cache_dir if args.cache_dir else None, )
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3696/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3695
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3695/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3695/comments
https://api.github.com/repos/huggingface/transformers/issues/3695/events
https://github.com/huggingface/transformers/issues/3695
596,475,184
MDU6SXNzdWU1OTY0NzUxODQ=
3,695
Deserialize BERT Sequence Classifier Quantized Model & Inferencing Issue
{ "login": "suryapa1", "id": 6042186, "node_id": "MDQ6VXNlcjYwNDIxODY=", "avatar_url": "https://avatars.githubusercontent.com/u/6042186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suryapa1", "html_url": "https://github.com/suryapa1", "followers_url": "https://api.github.com/users/suryapa1/followers", "following_url": "https://api.github.com/users/suryapa1/following{/other_user}", "gists_url": "https://api.github.com/users/suryapa1/gists{/gist_id}", "starred_url": "https://api.github.com/users/suryapa1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suryapa1/subscriptions", "organizations_url": "https://api.github.com/users/suryapa1/orgs", "repos_url": "https://api.github.com/users/suryapa1/repos", "events_url": "https://api.github.com/users/suryapa1/events{/privacy}", "received_events_url": "https://api.github.com/users/suryapa1/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,592
1,592
NONE
null
Dynamic quantization: issue loading a quantized model. I am currently following the Colab notebook linked below for quantization, where all Linear layers of a BertForSequenceClassification model are quantized.

Step 1: serialize the quantized model.
Step 2: deserialize it using the code shown in the screenshots below.

Screenshot #1: the Linear layers of the encoder are converted to DynamicQuantizedLinear layers right after conversion. However, once the serialized quantized model is loaded back for later use, those layers no longer show up as DynamicQuantizedLinear; the query, key, and value layers appear as plain Linear layers, as shown in the screenshots. In addition, running inference with the deserialized model produces wrong predictions. Could someone explain how to correctly deserialize quantized models? I have spent a lot of time trying to figure this out.

https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/dynamic_quantization_bert_tutorial.ipynb?authuser=1#scrollTo=dUJ1NGinLAa1

## Information

<img width="922" alt="Screenshot 2020-04-08 at 3 50 23 PM" src="https://user-images.githubusercontent.com/6042186/78773790-5c201480-79b1-11ea-930a-aff268c97394.png">
<img width="681" alt="Screenshot 2020-04-08 at 3 49 08 PM" src="https://user-images.githubusercontent.com/6042186/78773823-6a6e3080-79b1-11ea-88ef-406889880e83.png">
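The usual pattern for dynamic quantization is to rebuild the quantized architecture first and only then load the saved quantized `state_dict`; loading it into a plain float model (or via `from_pretrained`) will not restore the `DynamicQuantizedLinear` modules. A sketch (the checkpoint path is illustrative):

```python
import torch
from transformers import BertForSequenceClassification

# Recreate the same quantized architecture...
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# ...then load the previously saved quantized weights into it.
quantized_model.load_state_dict(torch.load("quantized_model_state_dict.pt"))
```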
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3695/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3695/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3694
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3694/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3694/comments
https://api.github.com/repos/huggingface/transformers/issues/3694/events
https://github.com/huggingface/transformers/issues/3694
596,446,109
MDU6SXNzdWU1OTY0NDYxMDk=
3,694
Extending XLM Roberta for Question Answering
{ "login": "anlausch", "id": 20592651, "node_id": "MDQ6VXNlcjIwNTkyNjUx", "avatar_url": "https://avatars.githubusercontent.com/u/20592651?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anlausch", "html_url": "https://github.com/anlausch", "followers_url": "https://api.github.com/users/anlausch/followers", "following_url": "https://api.github.com/users/anlausch/following{/other_user}", "gists_url": "https://api.github.com/users/anlausch/gists{/gist_id}", "starred_url": "https://api.github.com/users/anlausch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anlausch/subscriptions", "organizations_url": "https://api.github.com/users/anlausch/orgs", "repos_url": "https://api.github.com/users/anlausch/repos", "events_url": "https://api.github.com/users/anlausch/events{/privacy}", "received_events_url": "https://api.github.com/users/anlausch/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I've worked around this problem now by not extending the huggingface framework directly but implementing XLMRobertaForQuestionAnswering externally (using huggingface classes etc.). ", "Would make sense to have it as a new feature though but that's a different issue type. ;)", "Could you share the code or make PR? ", "@djstrong The code corresponds to my solution presented above. The issue was related to me changing the library instead of just adding the file externally." ]
1,586
1,587
1,586
NONE
null
# ❓ Questions & Help The AutoModelForQuestionAnswering is supported by many models, but not yet by XLM Roberta. In the current implementation I could see that most task-specific classes for XLM-R, e.g. XLMRobertaForSequenceClassification are just inheriting from Roberta. However, when I try to extend the class analogously, the process fails. This is my extension: `from transformers.modeling_roberta import ( RobertaForQuestionAnswering, )` and `@add_start_docstrings( XLM_ROBERTA_START_DOCSTRING, )` `class XLMRobertaForQuestionAnswering(RobertaForQuestionAnswering): config_class = XLMRobertaConfig pretrained_model_archive_map = XLM_ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP ` The error message I get is > Traceback (most recent call last): File "..miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 289, in _check_seekable f.seek(f.tell()) AttributeError: 'NoneType' object has no attribute 'seek' > During handling of the above exception, another exception occurred: > Traceback (most recent call last): File "../miniconda3/lib/python3.6/site-packages/transformers/modeling_utils.py", line 516, in from_pretrained state_dict = torch.load(resolved_archive_file, map_location="cpu") File "../miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 525, in load with _open_file_like(f, 'rb') as opened_file: File "../miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 217, in _open_file_like return _open_buffer_reader(name_or_buffer) File "../miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 202, in __init__ _check_seekable(buffer) File "../miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 292, in _check_seekable raise_err_msg(["seek", "tell"], e) File "/home/anlausch/miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 285, in raise_err_msg raise type(e)(msg) AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead. > File "../modeling_auto.py", line 968, in from_pretrained return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) File "..miniconda3/lib/python3.6/site-packages/transformers/modeling_utils.py", line 519, in from_pretrained "Unable to load weights from pytorch checkpoint file. " Any idea what's going on here?
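For reference, a sketch of the external workaround mentioned in the comments, assuming the transformers 2.x module layout shown in the traceback (import paths may differ in other releases):

```python
from transformers.configuration_xlm_roberta import XLMRobertaConfig
from transformers.modeling_roberta import RobertaForQuestionAnswering
from transformers.modeling_xlm_roberta import XLM_ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP


class XLMRobertaForQuestionAnswering(RobertaForQuestionAnswering):
    """Span-prediction head on top of XLM-R, mirroring the other
    XLMRobertaFor* classes, but defined outside the library source tree."""

    config_class = XLMRobertaConfig
    pretrained_model_archive_map = XLM_ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP
```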
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3694/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3693
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3693/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3693/comments
https://api.github.com/repos/huggingface/transformers/issues/3693/events
https://github.com/huggingface/transformers/issues/3693
596,428,570
MDU6SXNzdWU1OTY0Mjg1NzA=
3,693
How can I point the logs directory to Google Drive while fine-tuning a GPT-2 model, so I can visualize the data via TensorBoard?
{ "login": "hmdgit", "id": 59701320, "node_id": "MDQ6VXNlcjU5NzAxMzIw", "avatar_url": "https://avatars.githubusercontent.com/u/59701320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hmdgit", "html_url": "https://github.com/hmdgit", "followers_url": "https://api.github.com/users/hmdgit/followers", "following_url": "https://api.github.com/users/hmdgit/following{/other_user}", "gists_url": "https://api.github.com/users/hmdgit/gists{/gist_id}", "starred_url": "https://api.github.com/users/hmdgit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hmdgit/subscriptions", "organizations_url": "https://api.github.com/users/hmdgit/orgs", "repos_url": "https://api.github.com/users/hmdgit/repos", "events_url": "https://api.github.com/users/hmdgit/events{/privacy}", "received_events_url": "https://api.github.com/users/hmdgit/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,592
1,592
NONE
null
Hi, I am facing an issue retrieving logs while fine-tuning a GPT-2 model using [Google Colab](https://colab.research.google.com/github/interactive-fiction-class/interactive-fiction-class.github.io/blob/master/homeworks/language-model/hw4_transformer.ipynb). As the fine-tuning takes several hours, Google Colab halts the running process at a certain point, even when there are epochs remaining. In that case, I can successfully continue my fine-tuning by passing the "should_continue" parameter when running the script [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). However, my logs vanished, and I could not retrieve the TensorBoard data, which had been displayed with these commands ``` %load_ext tensorboard %tensorboard --logdir=runs ``` **Is there a way to point my logs to Google Drive, so that I can retrieve and visualize them with TensorBoard at any point in time?**
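One way to persist the event files, sketched under the assumption that the script writes TensorBoard logs to a local `runs/` directory (the `SummaryWriter` default; the Drive path is illustrative): mount Google Drive and symlink `runs` onto it before training starts.

```python
import os
from google.colab import drive

drive.mount('/content/drive')

# Keep the event files on Drive so they survive Colab runtime resets.
log_dir = '/content/drive/My Drive/gpt2-finetune/runs'  # illustrative path
os.makedirs(log_dir, exist_ok=True)
if not os.path.exists('runs'):
    os.symlink(log_dir, 'runs')
```

After a runtime reset, re-mount Drive, recreate the symlink, and `%tensorboard --logdir=runs` picks up the old runs again.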
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3693/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3692
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3692/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3692/comments
https://api.github.com/repos/huggingface/transformers/issues/3692/events
https://github.com/huggingface/transformers/issues/3692
596,307,174
MDU6SXNzdWU1OTYzMDcxNzQ=
3,692
How to use Hugging Face PyTorch BERT to generate the prediction TSV file from the test set of a GLUE task?
{ "login": "hrheru20", "id": 63332636, "node_id": "MDQ6VXNlcjYzMzMyNjM2", "avatar_url": "https://avatars.githubusercontent.com/u/63332636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hrheru20", "html_url": "https://github.com/hrheru20", "followers_url": "https://api.github.com/users/hrheru20/followers", "following_url": "https://api.github.com/users/hrheru20/following{/other_user}", "gists_url": "https://api.github.com/users/hrheru20/gists{/gist_id}", "starred_url": "https://api.github.com/users/hrheru20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hrheru20/subscriptions", "organizations_url": "https://api.github.com/users/hrheru20/orgs", "repos_url": "https://api.github.com/users/hrheru20/repos", "events_url": "https://api.github.com/users/hrheru20/events{/privacy}", "received_events_url": "https://api.github.com/users/hrheru20/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This was implemented ~1 month ago so closing this issue." ]
1,586
1,591
1,591
NONE
null
# ❓ Questions & Help ## Details Hello folks! Can you provide a simple example of using PyTorch BERT to generate the prediction TSV file from the test set of a GLUE task (such as MRPC) with a fine-tuned model, so that I can submit the prediction TSV file for each GLUE task to the GLUE leaderboard? Thank you very much.
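There is no official script for this in the version discussed here, but a minimal sketch of such a prediction dump could look as follows (the checkpoint directory, test-pair loading, and label mapping are assumptions to adapt per task):

```python
import csv
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("./mrpc_finetuned")  # your fine-tuned dir
model = BertForSequenceClassification.from_pretrained("./mrpc_finetuned")
model.eval()

# In practice, read the sentence pairs from the task's test.tsv.
test_pairs = [("He said hi.", "He greeted them.")]

with open("MRPC.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["index", "prediction"])  # GLUE submission header
    for i, (s1, s2) in enumerate(test_pairs):
        inputs = tokenizer.encode_plus(s1, s2, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs)[0]
        writer.writerow([i, logits.argmax(-1).item()])
```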
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3692/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3691
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3691/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3691/comments
https://api.github.com/repos/huggingface/transformers/issues/3691/events
https://github.com/huggingface/transformers/issues/3691
596,295,775
MDU6SXNzdWU1OTYyOTU3NzU=
3,691
cannot import name AddedToken
{ "login": "nrjvarshney", "id": 19836137, "node_id": "MDQ6VXNlcjE5ODM2MTM3", "avatar_url": "https://avatars.githubusercontent.com/u/19836137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nrjvarshney", "html_url": "https://github.com/nrjvarshney", "followers_url": "https://api.github.com/users/nrjvarshney/followers", "following_url": "https://api.github.com/users/nrjvarshney/following{/other_user}", "gists_url": "https://api.github.com/users/nrjvarshney/gists{/gist_id}", "starred_url": "https://api.github.com/users/nrjvarshney/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nrjvarshney/subscriptions", "organizations_url": "https://api.github.com/users/nrjvarshney/orgs", "repos_url": "https://api.github.com/users/nrjvarshney/repos", "events_url": "https://api.github.com/users/nrjvarshney/events{/privacy}", "received_events_url": "https://api.github.com/users/nrjvarshney/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Created another environment from scratch and it got resolved.", "you need install transformers this way:\r\n\r\n```\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install .\r\n```", "> you need install transformers this way:\r\n> \r\n> ```\r\n> git clone https://github.com/huggingface/transformers\r\n> cd transformers\r\n> pip install .\r\n> ```\r\n\r\nThis worked for me thanks!", "> you need install transformers this way:\r\n> \r\n> ```\r\n> git clone https://github.com/huggingface/transformers\r\n> cd transformers\r\n> pip install .\r\n> ```\r\n\r\nI was having a similar issue on colab:\r\n```\r\ncan't pickle AddedToken objects\r\n```\r\nThis solution also worked for me. Thanks!", "> you need install transformers this way:\r\n> \r\n> ```\r\n> git clone https://github.com/huggingface/transformers\r\n> cd transformers\r\n> pip install .\r\n> ```\r\n\r\nThanks much, this worked for me!", "> you need install transformers this way:\r\n> \r\n> ```\r\n> git clone https://github.com/huggingface/transformers\r\n> cd transformers\r\n> pip install .\r\n> ```\r\n\r\nThis solution doesn't work for me! I don't know maybe there is a conflict with `pip` an `conda`. I guess after I installed a package `bert_score` by conda, this error appeared and won't go" ]
1,586
1,630
1,586
NONE
null
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Albert Language I am using the model on (English, Chinese ...): English 27 from typing import List, Optional, Sequence, Tuple, Union 28 ---> 29 from tokenizers import AddedToken, Encoding 30 from tokenizers.decoders import Decoder 31 from tokenizers.implementations import BaseTokenizer ImportError: cannot import name 'AddedToken' The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name): * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3691/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3690
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3690/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3690/comments
https://api.github.com/repos/huggingface/transformers/issues/3690/events
https://github.com/huggingface/transformers/issues/3690
596,271,543
MDU6SXNzdWU1OTYyNzE1NDM=
3,690
Why does BertSelfAttention have no Add and Norm layer?
{ "login": "weiliangxiao", "id": 20767734, "node_id": "MDQ6VXNlcjIwNzY3NzM0", "avatar_url": "https://avatars.githubusercontent.com/u/20767734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/weiliangxiao", "html_url": "https://github.com/weiliangxiao", "followers_url": "https://api.github.com/users/weiliangxiao/followers", "following_url": "https://api.github.com/users/weiliangxiao/following{/other_user}", "gists_url": "https://api.github.com/users/weiliangxiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/weiliangxiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/weiliangxiao/subscriptions", "organizations_url": "https://api.github.com/users/weiliangxiao/orgs", "repos_url": "https://api.github.com/users/weiliangxiao/repos", "events_url": "https://api.github.com/users/weiliangxiao/events{/privacy}", "received_events_url": "https://api.github.com/users/weiliangxiao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's in [`BertSelfOutput`](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L261) which is called right after the `BertSelfAttention` in [`BertAttention`](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L316)." ]
1,586
1,586
1,586
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3690/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3689
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3689/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3689/comments
https://api.github.com/repos/huggingface/transformers/issues/3689/events
https://github.com/huggingface/transformers/issues/3689
596,270,697
MDU6SXNzdWU1OTYyNzA2OTc=
3,689
Can't update the train_batch_size and eval_batch_size for the training image in a Docker container
{ "login": "tbs17", "id": 34946571, "node_id": "MDQ6VXNlcjM0OTQ2NTcx", "avatar_url": "https://avatars.githubusercontent.com/u/34946571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tbs17", "html_url": "https://github.com/tbs17", "followers_url": "https://api.github.com/users/tbs17/followers", "following_url": "https://api.github.com/users/tbs17/following{/other_user}", "gists_url": "https://api.github.com/users/tbs17/gists{/gist_id}", "starred_url": "https://api.github.com/users/tbs17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tbs17/subscriptions", "organizations_url": "https://api.github.com/users/tbs17/orgs", "repos_url": "https://api.github.com/users/tbs17/repos", "events_url": "https://api.github.com/users/tbs17/events{/privacy}", "received_events_url": "https://api.github.com/users/tbs17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,586
1,586
1,586
NONE
null
Hi, I tried to train a couple of multi-label models with the fast-bert library, using the container files to build the Docker image, uploaded it to AWS ECR, and used the AWS helper notebook included in the 'sample notebook' folder of the repo. I have trained 3 models and, **regardless of the train_batch_size I set in the hyperparameters.json file, the models when training still report a total train batch size of 64 and an eval batch size of 128.** **My questions here:** - Am I not able to update the train batch size if the training is happening in a container? - Do the training and eval batch sizes have some relationship? At a glance, it looks like eval_batch_size is double the train_batch_size. I would say there shouldn't be any relationship; however, **why is there no parameter in hyperparameters.json to specify the eval_batch_size?** - The three models I have trained all got a really good accuracy_thresh, above 0.97. **However, one of the models only ever outputs 2 classes as the top-probability class.** The original data has about 9455 rows and 113 classes. I have also trained it on the BERT TensorFlow version and was able to get multiple labels as the top predicted class. **What could possibly be wrong?** Note that my other 2 models have about 36 classes and 11 classes, and their top predicted classes all came out reasonable, meaning all 36 and 11 classes showed up as the top predicted class. In addition, I don't see the performance change after epoch 2, whatever accuracy_thresh I set. **Please provide some guidance as this is going into deployment soon** but I'm still struggling to figure out why....
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3689/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3688
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3688/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3688/comments
https://api.github.com/repos/huggingface/transformers/issues/3688/events
https://github.com/huggingface/transformers/pull/3688
596,239,184
MDExOlB1bGxSZXF1ZXN0NDAwNTczNzk1
3,688
Big cleanup of `glue_convert_examples_to_features`
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=h1) Report\n> Merging [#3688](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/715aa5b1356b878cbab7a7415a1c1b03a7777ae2&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `10.63%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3688/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3688 +/- ##\n==========================================\n+ Coverage 78.02% 78.06% +0.03% \n==========================================\n Files 104 104 \n Lines 17710 17708 -2 \n==========================================\n+ Hits 13819 13823 +4 \n+ Misses 3891 3885 -6 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/3688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `24.68% <0.00%> (ΓΈ)` | |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/3688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `29.79% <10.86%> (+2.26%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.84% <0.00%> (-0.13%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=footer). Last update [715aa5b...b867779](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "There is a combinatorial explosion between `tokenizer classes` x `glue datasets` so it's not super practical to test everything, but:\r\n- I've tested \"roberta-large\", \"bert-base-uncased\", \"xlnet-based\" tokenizers x \"sst2\" and \"mnli\" datasets and results are identical βœ… \r\n- encoding a batch of sentences or sentence pairs, while padding to a specific length, is really a native feature of a tokenizer at this point, so [those lines](https://github.com/huggingface/transformers/pull/3688/files#diff-8bc8284670454c05520b097dd51ad787R137-R139) in the current PR call the canonical API to do that. 
If there's a discrepancy with the historical way of tokenizing at this point it's probably outside the scope of this PR.\r\n", "**Note, however**, that following this PR the performance boost associated with using a fast tokenizer coupled with using `batch_encode` seems very variable, cc: @n1t0 @mfuntowicz \r\n\r\nOn my (mac OS) local machine it takes pretty much the same time using a fast and a non-fast tokenizer (even though the fast one burns all my CPU cores).\r\n\r\nOn a Colab notebook seems like perf varies a lot between executions (https://colab.research.google.com/drive/1DXOegSz7Tyr7MeSHYBBg40kiDhu4-JPr?authuser=1#scrollTo=NlygQfeyg-5b), with the fast tokenizer not always being faster than the other one.\r\n\r\nSee [notebook](https://colab.research.google.com/drive/1DXOegSz7Tyr7MeSHYBBg40kiDhu4-JPr)\r\n\r\nWould be interesting to dive in and do a more systematic benchmark than I did, considering that GLUE is a good benchmark for a real-world training workload. " ]
1,586
1,586
1,586
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3688/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3688", "html_url": "https://github.com/huggingface/transformers/pull/3688", "diff_url": "https://github.com/huggingface/transformers/pull/3688.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3688.patch", "merged_at": 1586528418000 }
https://api.github.com/repos/huggingface/transformers/issues/3687
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3687/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3687/comments
https://api.github.com/repos/huggingface/transformers/issues/3687/events
https://github.com/huggingface/transformers/issues/3687
596,210,761
MDU6SXNzdWU1OTYyMTA3NjE=
3,687
Is it possible to use multiprocessing for pipelines?
{ "login": "Weilin37", "id": 5770543, "node_id": "MDQ6VXNlcjU3NzA1NDM=", "avatar_url": "https://avatars.githubusercontent.com/u/5770543?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Weilin37", "html_url": "https://github.com/Weilin37", "followers_url": "https://api.github.com/users/Weilin37/followers", "following_url": "https://api.github.com/users/Weilin37/following{/other_user}", "gists_url": "https://api.github.com/users/Weilin37/gists{/gist_id}", "starred_url": "https://api.github.com/users/Weilin37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Weilin37/subscriptions", "organizations_url": "https://api.github.com/users/Weilin37/orgs", "repos_url": "https://api.github.com/users/Weilin37/repos", "events_url": "https://api.github.com/users/Weilin37/events{/privacy}", "received_events_url": "https://api.github.com/users/Weilin37/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I have the same the problem. When you close the issue does it mean that it is fixed? ", "I have also met this problem, how can I solve this? Thanks!!!!", "> I have also met this problem, how can I solve this? Thanks!!!!\r\n\r\nI switched to using `nlp.pipe`, which is the built-in function for multiprocessing, instead of doing it by hand. " ]
1,586
1,640
1,592
NONE
null
I am trying to use multiprocessing with pipelines, but it doesn't seem to work. I think it's because the pipeline already uses multiprocessing features, so you can't nest multiprocessing inside it. Has anyone been able to get it to work?
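A workaround sketch rather than a fix for nested multiprocessing: pass a list of inputs in a single call, since the pipeline (and the underlying framework's thread pool) already parallelizes work within one process.

```python
from transformers import pipeline

nlp = pipeline("sentiment-analysis")

texts = ["I love this.", "I hate this.", "It is fine."]
# One call, list in / list of result dicts out; no extra Process pool needed.
print(nlp(texts))
```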
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3687/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3687/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3686
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3686/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3686/comments
https://api.github.com/repos/huggingface/transformers/issues/3686/events
https://github.com/huggingface/transformers/issues/3686
596,178,829
MDU6SXNzdWU1OTYxNzg4Mjk=
3,686
Bug in variable name in NER
{ "login": "TarasPriadka", "id": 14134797, "node_id": "MDQ6VXNlcjE0MTM0Nzk3", "avatar_url": "https://avatars.githubusercontent.com/u/14134797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TarasPriadka", "html_url": "https://github.com/TarasPriadka", "followers_url": "https://api.github.com/users/TarasPriadka/followers", "following_url": "https://api.github.com/users/TarasPriadka/following{/other_user}", "gists_url": "https://api.github.com/users/TarasPriadka/gists{/gist_id}", "starred_url": "https://api.github.com/users/TarasPriadka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TarasPriadka/subscriptions", "organizations_url": "https://api.github.com/users/TarasPriadka/orgs", "repos_url": "https://api.github.com/users/TarasPriadka/repos", "events_url": "https://api.github.com/users/TarasPriadka/events{/privacy}", "received_events_url": "https://api.github.com/users/TarasPriadka/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I also see the same issue, has there been a fix for this?\r\nAttributeError: 'BertTokenizer' object has no attribute 'num_added_tokens'", "@minhtuev you can change `num_added_tokens` to `num_special_tokens_to_add`. This made the fix for me" ]
1,586
1,588
1,588
NONE
null
# πŸ› Bug ## Information Model I am using (Bert, XLNet ...): Irrelevant Language I am using the model on (English, Chinese ...): Irrelevant The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: NER * [ ] my own task or dataset: (give details below) ## Error ``` File "/transformers/examples/ner/utils_ner.py", line 123, in convert_examples_to_features special_tokens_count = tokenizer.num_added_tokens() AttributeError: 'BertTokenizer' object has no attribute 'num_added_tokens' ``` ## Issue After update to new Tokenizers, some util files are broken. Found one in examples/ner/utils_ner.py. Need to change line 123 from num_added_tokens to num_special_tokens_to_add.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3686/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3686/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3685
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3685/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3685/comments
https://api.github.com/repos/huggingface/transformers/issues/3685/events
https://github.com/huggingface/transformers/issues/3685
596,164,340
MDU6SXNzdWU1OTYxNjQzNDA=
3,685
Requesting model for TFAlbertForQuestionAnswering
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! We don't currently have a `run_tf_squad` script, but we would appreciate a contribution! The preprocessing of SQuAD shouldn't be different between pytorch and tensorflow. We haven't gotten to testing that as we haven't gotten to writing that script yet.\r\n\r\nIf we're to have a TF SQuAD script, we would have to align the pre-processing techniques as well!", "And we would definitely welcome a PR introducing `TFAlbertForQuestionAnswering`!", "Resolved in https://github.com/huggingface/transformers/commit/6d00033e97e1751a897f2317fdfd35dd853cee29 ." ]
1,586
1,588
1,588
CONTRIBUTOR
null
# 🌟 New model addition Is there support for adding a TensorFlow version of the AlbertForQuestionAnswering model? I would be happy to contribute the work. This would also enable the `run_squad.py` example script for TensorFlow. It also looks like [the preprocessing of SQuAD data is different](https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/squad.py#L349) for PyTorch and TensorFlow. PyTorch has an option to return `all_example_index`, while TensorFlow does not. This means that running the model evaluation script (which takes the max F1/EM over all answers) is only possible in PT. Running SQuAD evaluation in TensorFlow is important for my use case, and I would love to understand the design decisions or difficulties that went into that decision. Again, happy to contribute - any support or objections for aligning the data preprocessing techniques?
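A rough sketch of what the requested class could look like, mirroring the existing TF BERT question-answering head; the layer wiring below is an assumption, not the implementation that was eventually merged.

```python
import tensorflow as tf
from transformers import TFAlbertPreTrainedModel
from transformers.modeling_tf_albert import TFAlbertMainLayer


class TFAlbertForQuestionAnswering(TFAlbertPreTrainedModel):
    def __init__(self, config, *inputs, **kwargs):
        super().__init__(config, *inputs, **kwargs)
        self.albert = TFAlbertMainLayer(config, name="albert")
        # Two logits per token: span start and span end.
        self.qa_outputs = tf.keras.layers.Dense(2, name="qa_outputs")

    def call(self, inputs, **kwargs):
        sequence_output = self.albert(inputs, **kwargs)[0]
        logits = self.qa_outputs(sequence_output)
        start_logits, end_logits = tf.split(logits, num_or_size_splits=2, axis=-1)
        return tf.squeeze(start_logits, axis=-1), tf.squeeze(end_logits, axis=-1)
```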
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3685/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3685/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3684
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3684/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3684/comments
https://api.github.com/repos/huggingface/transformers/issues/3684/events
https://github.com/huggingface/transformers/pull/3684
596,109,274
MDExOlB1bGxSZXF1ZXN0NDAwNDY2NTc0
3,684
Updating the TensorFlow models to work as expected with tokenizers v3.0.0
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,586
1,586
1,586
MEMBER
null
Models and tokenizers should work in harmony; this is why it is an API design choice to be able to send the output of `encode_plus` and `batch_encode_plus` straight to the model, in both PyTorch and TensorFlow: ```py encoded_sequence = tokenizer.encode_plus(sequence) model(encoded_sequence) # for TensorFlow model(**encoded_sequence) # for PyTorch ``` With the recent changes of tokenizers-v3.0.0 and the introduction of `BatchEncoding`, the way the TensorFlow models usually identified such inputs didn't work, as it was looking for a `dict` instead of a `BatchEncoding`. This PR patches this. This feature was previously untested; this PR addresses this by adding four different tests on each tokenizers; testing that the tokenizers return correct `BatchEncoding` objects in both PyTorch and TensorFlow which can be fed directly to the model. Both `encode_plus` and `batch_encode_plus` are tested. Some issues were found, and were patched with this PR.
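Concretely, the contract being tested looks like this (the checkpoint name is just an example):

```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")

# A BatchEncoding must be accepted exactly like a plain dict of inputs.
encoded = tokenizer.encode_plus("Hello world", return_tensors="tf")
outputs = model(encoded)
```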
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3684/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3684/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3684", "html_url": "https://github.com/huggingface/transformers/pull/3684", "diff_url": "https://github.com/huggingface/transformers/pull/3684.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3684.patch", "merged_at": 1586377364000 }
https://api.github.com/repos/huggingface/transformers/issues/3683
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3683/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3683/comments
https://api.github.com/repos/huggingface/transformers/issues/3683/events
https://github.com/huggingface/transformers/pull/3683
596,086,589
MDExOlB1bGxSZXF1ZXN0NDAwNDQ3NTQ1
3,683
question-answering pipeline error : too many values to unpack (expected 2)
{ "login": "wwwehr", "id": 33910651, "node_id": "MDQ6VXNlcjMzOTEwNjUx", "avatar_url": "https://avatars.githubusercontent.com/u/33910651?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wwwehr", "html_url": "https://github.com/wwwehr", "followers_url": "https://api.github.com/users/wwwehr/followers", "following_url": "https://api.github.com/users/wwwehr/following{/other_user}", "gists_url": "https://api.github.com/users/wwwehr/gists{/gist_id}", "starred_url": "https://api.github.com/users/wwwehr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wwwehr/subscriptions", "organizations_url": "https://api.github.com/users/wwwehr/orgs", "repos_url": "https://api.github.com/users/wwwehr/repos", "events_url": "https://api.github.com/users/wwwehr/events{/privacy}", "received_events_url": "https://api.github.com/users/wwwehr/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1771187924, "node_id": "MDU6TGFiZWwxNzcxMTg3OTI0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline", "name": "Core: Pipeline", "color": "FF7066", "default": false, "description": "Internals of the library; Pipeline." } ]
closed
false
null
[]
[ "hmmm, `mrm8488/bert-uncased-finetuned-qnli` is a sequence classification model, not a QA model.\r\n\r\nYou probably get warnings while loading it in a QA Pipeline.\r\n\r\nDoes this happen with other (QA) models?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,594
1,594
NONE
null
For some `question-answering` models the pipeline encounters extra tuples from the `.model()` call, where we must have exactly 2. > This stub results in the following error: ```python from transformers.pipelines import pipeline model_name = "mrm8488/bert-uncased-finetuned-qnli" nlp = pipeline("question-answering", model=model_name, tokenizer=model_name) QA_input = { "question": "Why is model conversion important?", "context": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.", } res = nlp(QA_input) ``` > error message ```bash Traceback (most recent call last): File "test3.py", line 10, in <module> res = nlp(QA_input) File "/opt/conda/lib/python3.7/site-packages/transformers/pipelines.py", line 1010, in __call__ start, end = self.model(**fw_args) ValueError: too many values to unpack (expected 2) ``` > env ```text - `transformers` version: 2.8.0 - Platform: Linux-4.9.184-linuxkit-x86_64-with-debian-buster-sid - Python version: 3.7.4 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: nope - Using distributed or parallel set-up in script?: nope ```
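For comparison, a sketch with a checkpoint that actually carries a span-prediction head (the model name is an example of a known QA checkpoint), whose forward pass returns exactly the two tensors the pipeline unpacks:

```python
from transformers import pipeline

nlp = pipeline("question-answering",
               model="distilbert-base-cased-distilled-squad")
res = nlp(question="Why is model conversion important?",
          context="The option to convert models between FARM and transformers "
                  "gives freedom to the user.")
print(res)
```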
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3683/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3683", "html_url": "https://github.com/huggingface/transformers/pull/3683", "diff_url": "https://github.com/huggingface/transformers/pull/3683.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3683.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/3682
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3682/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3682/comments
https://api.github.com/repos/huggingface/transformers/issues/3682/events
https://github.com/huggingface/transformers/pull/3682
596,068,862
MDExOlB1bGxSZXF1ZXN0NDAwNDMyOTI4
3,682
[T5, generation] Add decoder caching for T5
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=h1) Report\n> Merging [#3682](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a594ee9c84dde933a3d0b4e07ff2994a1960574c&el=desc) will **increase** coverage by `0.07%`.\n> The diff coverage is `89.76%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3682/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3682 +/- ##\n==========================================\n+ Coverage 78.02% 78.09% +0.07% \n==========================================\n Files 104 104 \n Lines 17710 17786 +76 \n==========================================\n+ Hits 13818 13890 +72 \n- Misses 3892 3896 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.21% <89.68%> (+1.72%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.03% <100.00%> (+0.18%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=footer). Last update [a594ee9...67ae81f](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,586
1,586
1,586
MEMBER
null
This PR greatly speeds up the autoregressive decoding for T5 by storing past key / value states. The summarization test: https://github.com/huggingface/transformers/blob/500aa12318ce5acd289d5edb6cb8266b3c3b162e/tests/test_modeling_t5.py#L260 now takes only 44s whereas before it took 311s -> 7.5x speed-up. This will also significantly speed up the translation and summarization pipelines when using T5. - [x] Add key value state caching - [x] Test for equal output on hard-coded tests - [x] Add simple past tests including using an attention mask - [x] update the docstring - [x] clean up code The caching design was already sketched in commented-out code in place. It was cleaned up, made functional, and implemented very similarly to GPT-2's. ### IMPORTANT: This PR has a breaking change, in that it increases the default output length of T5Model and T5ForConditionalGeneration from 4 to 5 (including the `past_key_value_states`). ### Future PR: - [ ] Do the same for TF if this PR is accepted. Would be nice if you could take a look @craffel @thomwolf @LysandreJik @sshleifer
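The caching pattern being ported here is the one GPT-2 already exposes; a sketch of the idea with the GPT-2 API of this release (the `past` kwarg name follows transformers 2.x), feeding only the newest token plus the cached key/value states at each step:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("Hello", return_tensors="pt")
past = None
with torch.no_grad():
    for _ in range(5):
        logits, past = model(input_ids, past=past)[:2]
        # Only the newly chosen token goes back in; the cache carries the rest.
        input_ids = logits[:, -1, :].argmax(-1, keepdim=True)
```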
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3682/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3682", "html_url": "https://github.com/huggingface/transformers/pull/3682", "diff_url": "https://github.com/huggingface/transformers/pull/3682.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3682.patch", "merged_at": 1586473370000 }
https://api.github.com/repos/huggingface/transformers/issues/3681
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3681/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3681/comments
https://api.github.com/repos/huggingface/transformers/issues/3681/events
https://github.com/huggingface/transformers/pull/3681
596,054,277
MDExOlB1bGxSZXF1ZXN0NDAwNDIxMDA3
3,681
Updating to the new transformers
{ "login": "hsajjad", "id": 3755539, "node_id": "MDQ6VXNlcjM3NTU1Mzk=", "avatar_url": "https://avatars.githubusercontent.com/u/3755539?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hsajjad", "html_url": "https://github.com/hsajjad", "followers_url": "https://api.github.com/users/hsajjad/followers", "following_url": "https://api.github.com/users/hsajjad/following{/other_user}", "gists_url": "https://api.github.com/users/hsajjad/gists{/gist_id}", "starred_url": "https://api.github.com/users/hsajjad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hsajjad/subscriptions", "organizations_url": "https://api.github.com/users/hsajjad/orgs", "repos_url": "https://api.github.com/users/hsajjad/repos", "events_url": "https://api.github.com/users/hsajjad/events{/privacy}", "received_events_url": "https://api.github.com/users/hsajjad/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,586
1,586
1,586
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3681/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3681", "html_url": "https://github.com/huggingface/transformers/pull/3681", "diff_url": "https://github.com/huggingface/transformers/pull/3681.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3681.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/3680
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3680/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3680/comments
https://api.github.com/repos/huggingface/transformers/issues/3680/events
https://github.com/huggingface/transformers/issues/3680
596,043,180
MDU6SXNzdWU1OTYwNDMxODA=
3,680
How to use GPT2DoubleHeadsModel?
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I see that the issue is closed? what is the answer?", "We are trying to move such questions more and more to the forum because they get more traction there and the library's issues should primarily be used for \"real\" issues. \r\n\r\nIt would be awesome if you guys could post the question on \r\nhttps://discuss.huggingface.co/", "I meet the same question. " ]
1,586
1,688
1,601
NONE
null
Hello, I have a question about the example shown in the GPT2DoubleHeadsModel documentation page: https://huggingface.co/transformers/model_doc/gpt2.html#gpt2doubleheadsmodel In the example, the input to the GPT2DoubleHeadsModel is simply a set of choices. But what if the multiple choice question that I want to process also includes a question text? so for example, Bob likes candy; what does Bob like? a. Bag b. Burger c. Candy d. Pencil In the example above, the question text would be " Bob likes candy; what does Bob like?" and the choices will be the Bag, Burger, Candy and Pencil. How should I pre-process my multiple choice questions to be used with the GPT2DoubleHeadsModel? For example, given that the token that will be used for the classification is "<|endoftext|>" (this is the default eos token for the GPT2 models), would the following be fine? ```python import torch from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2DoubleHeadsModel.from_pretrained('gpt2') choices = [ "Bob likes candy ; what does Bob like ? Bag <|endoftext|>", "Bob likes candy ; what does Bob like ? Burger <|endoftext|>", "Bob likes candy ; what does Bob like ? Candy <|endoftext|>", "Bob likes candy ; what does Bob like ? Pencil <|endoftext|>"] encoded_choices = [tokenizer.encode(s) for s in choices] eos_token_location = [tokens.index(tokenizer.eos_token_id) for tokens in encoded_choices] input_ids = torch.tensor(encoded_choices).unsqueeze(0) mc_token_ids = torch.tensor([eos_token_location]) outputs = model(input_ids, mc_token_ids=mc_token_ids) lm_prediction_scores, mc_prediction_scores = outputs[:2] ``` Thank you,
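One practical detail about the snippet above (an assumption, since the four choices may tokenize to different lengths): `torch.tensor(encoded_choices)` only stacks rows of equal length, so pad each choice first; `.index()` still finds the real classification token because it returns the first occurrence.

```python
# Pad every encoded choice to the longest one before stacking.
max_len = max(len(tokens) for tokens in encoded_choices)
encoded_choices = [tokens + [tokenizer.eos_token_id] * (max_len - len(tokens))
                   for tokens in encoded_choices]
```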
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3680/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3679
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3679/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3679/comments
https://api.github.com/repos/huggingface/transformers/issues/3679/events
https://github.com/huggingface/transformers/pull/3679
596,042,207
MDExOlB1bGxSZXF1ZXN0NDAwNDExMjU1
3,679
Update doc for {Summarization,Translation}Pipeline and other tweaks
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sshleifer It's not 100% needed to have model cards but definitely encouraged as they unlock new features e.g. here, discoverability of the models in the model hub, etc.\r\n\r\nYeah, adding an item to the checklist (I guess in `templates/adding_a_new_model`) would be nice, do you want to do it?" ]
1,586
1,586
1,586
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3679/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3679", "html_url": "https://github.com/huggingface/transformers/pull/3679", "diff_url": "https://github.com/huggingface/transformers/pull/3679.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3679.patch", "merged_at": 1586353502000 }
https://api.github.com/repos/huggingface/transformers/issues/3678
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3678/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3678/comments
https://api.github.com/repos/huggingface/transformers/issues/3678/events
https://github.com/huggingface/transformers/issues/3678
596,001,430
MDU6SXNzdWU1OTYwMDE0MzA=
3,678
run_generation.py with empty input
{ "login": "r0levrai", "id": 22660388, "node_id": "MDQ6VXNlcjIyNjYwMzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22660388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/r0levrai", "html_url": "https://github.com/r0levrai", "followers_url": "https://api.github.com/users/r0levrai/followers", "following_url": "https://api.github.com/users/r0levrai/following{/other_user}", "gists_url": "https://api.github.com/users/r0levrai/gists{/gist_id}", "starred_url": "https://api.github.com/users/r0levrai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/r0levrai/subscriptions", "organizations_url": "https://api.github.com/users/r0levrai/orgs", "repos_url": "https://api.github.com/users/r0levrai/repos", "events_url": "https://api.github.com/users/r0levrai/events{/privacy}", "received_events_url": "https://api.github.com/users/r0levrai/received_events", "type": "User", "site_admin": false }
[ { "id": 1834059054, "node_id": "MDU6TGFiZWwxODM0MDU5MDU0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Generation", "name": "Ex: Generation", "color": "06EFF8", "default": false, "description": "Natural Language Generation" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hi @r0levrai, \r\n\r\nsorry for responding so late. Thanks for spotting the error. Once the linked PR is merged generation with an empty prompt should be fine! \r\n\r\nAlso note that there is a generation pipeline now which you could use as follows (once the PR is merged):\r\n\r\n```python\r\nfrom transformers import pipeline\r\ngenerator = pipeline(\"text-generation\")\r\ngenerator(\"\") # empty prompt\r\n```", "This is good news, thanks!" ]
1,586
1,588
1,588
NONE
null
Hi, I would like to generate text following some context but also from scratch as seen in *Write with Transformer*. Using both * `python run_generation_callable.py --model_type=gpt2 --model_name_or_path=gpt2` and feeding an empty prompt to the `Model prompt >>> ` * or changing the line `prompt_text = args.prompt if args.prompt else input("Model prompt >>> ")` in the source to `prompt_text = args.prompt` and using `python run_generation_callable.py --model_type=gpt2 --model_name_or_path=gpt2 --prompt ""` result in a `RuntimeError: cannot reshape tensor of 0 elements into shape [-1, 0] because the unspecified dimension size -1 can be any value and is ambiguous` (tell me if you need the complete stack trace or more detail for the reproduction). Thanks by advance for any tips, workarounds or directions on achieving this!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3678/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3678/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3677
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3677/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3677/comments
https://api.github.com/repos/huggingface/transformers/issues/3677/events
https://github.com/huggingface/transformers/issues/3677
595,976,672
MDU6SXNzdWU1OTU5NzY2NzI=
3,677
Does anyone have the XLNet (and ALBERT) NER performance on CONLL-2003
{ "login": "bugface", "id": 16659741, "node_id": "MDQ6VXNlcjE2NjU5NzQx", "avatar_url": "https://avatars.githubusercontent.com/u/16659741?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bugface", "html_url": "https://github.com/bugface", "followers_url": "https://api.github.com/users/bugface/followers", "following_url": "https://api.github.com/users/bugface/following{/other_user}", "gists_url": "https://api.github.com/users/bugface/gists{/gist_id}", "starred_url": "https://api.github.com/users/bugface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bugface/subscriptions", "organizations_url": "https://api.github.com/users/bugface/orgs", "repos_url": "https://api.github.com/users/bugface/repos", "events_url": "https://api.github.com/users/bugface/events{/privacy}", "received_events_url": "https://api.github.com/users/bugface/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "With ALBERT v1:\r\n\r\nhttps://github.com/huggingface/transformers/pull/1683#issuecomment-556001607\r\n\r\nI got better results using the recent integrated ELECTRA model :)\r\n\r\n", "@stefan-it j/w but do you know why bert-lg fine-tuned is listed as achieving 92.8 f1 on conll03 in [this paper?](https://paperswithcode.com/sota/named-entity-recognition-ner-on-conll-2003) Noticing it's over a pt higher f1 than Transformers's version. ", "@stefan-it \r\n\r\nYour albert result is very close to what I got. \r\n\r\nBy any chance, did you experiment with the XLNet?\r\n\r\nMy problem is:\r\nIn https://github.com/stevezheng23/xlnet_extension_tf, the author reported the ner performance as 0.9267. But I can only obtain a performance of 0.7626 with the same batch size and learning rate but longer training steps using the this library. I would like to confirm if the problem is my implementation but there is no baseline on XLNet.", "@stefan-it Thank you for your contribution to the Electra model finetuned on CoNLL03. I see you shared the weights in this repository. Could you please share the license for these weights? I could not find a model card for it.", "Hi @stefan-it !\r\nI know that this issue is not about electra, but I have the same question regarding electra too :sweat_smile: \r\nI ran NER CoNLL-2003 training with electra small like this:\r\n\r\n`python examples/ner/run_ner.py --model_type electra --model_name_or_path google/electra-small-discriminator --do_train --do_eval --do_predict --data_dir /home/corpora/ner/conll2003 --labels /home/corpora/ner/conll2003/labels.txt --num_train_epochs 6 --per_gpu_train_batch_size 256 --per_gpu_eval_batch_size 256 --max_seq_length 128 --output_dir /home/models/electra/conll2003 --evaluate_during_training --save_steps 1000 --logging_steps 1000 --overwrite_output_dir`\r\n\r\nBut got only 83.20% in F1. I know you ran electra small on the actual electra repository, but can you describe what you did in hugging face?", "Hi @petulla , the problem with the BERT paper is, that they've used document context for each token during evaluation. See e.g. this [discussion](https://github.com/allenai/allennlp/pull/2067#issuecomment-443961816) in the AllenNLP repo :)", "@guillaume-be I normally use MIT for all trained models (so should be fine for the fine-tuned ELECTRA model as well) :)", "@pvcastro Try to use a smaller batch size, for example with the configuration:\r\n\r\n```json\r\n{\r\n \"data_dir\": \"./data_en\",\r\n \"labels\": \"./data_en/labels.txt\",\r\n \"model_name_or_path\": \"google/electra-small-discriminator\",\r\n \"output_dir\": \"electra-small-en-1\",\r\n \"max_seq_length\": 128,\r\n \"num_train_epochs\": 5,\r\n \"per_gpu_train_batch_size\": 16,\r\n \"save_steps\": 878,\r\n \"seed\": 1,\r\n \"do_train\": true,\r\n \"do_eval\": true,\r\n \"do_predict\": true,\r\n \"--fp16\": true\r\n}\r\n```\r\n\r\nYou should be able to reach ~88.35% on the test set and 92.13% on development set.\r\n\r\nI just used the latest `master` version of Transformers + saved the JSON-based configuration as `config-electra-small.json`, then you can run training via `python3 run_ner.py config-electra-small.json` :)", "Hi @stefan-it , thanks for the input! I ran this config and got a pretty similar result. I had no idea that a larger batch size had this impact. I get 85% for bs 128. Is this for all transformer models, or for electra only? Or for sequence labeling tasks, perhaps? Do you know why this happens? 
", "For Transformer-based model on sequence labeling tasks batch sizes of 8, 16 or 32 are a good choice for hyper-parameter search. So e.g. the [BERT paper](https://arxiv.org/abs/1810.04805) mentions [16. 32] for their experiments (see appendix A.3).\r\n\r\nAnd there's an interesting paper from Reimers and Gurevych about hyper-parameters for lstm-based networks for sequence labeling, but in my opinion their recommendations for batch sizes are also valid for Transformer-based models: [\"Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks\"](https://arxiv.org/abs/1707.06799), see section 7.10 :)", "Thanks @stefan-it , I'll dig into these references! ", "Hi @stefan-it, it is quite strange that I got F1 0 for conll-03 dataset with **electra-large-discriminator**, while 0.88 and 0.91 with **small** and **base** models. The other settings are the same for the three models. Have you encountered this?", "Hi @lkluo , I can't remember fine-tuning instabilities with the ELECTRA models... but could you paste the fine-tuning configuration that you've used πŸ€”\r\n\r\n", "> Hi @lkluo , I can't remember fine-tuning instabilities with the ELECTRA models... but could you paste the fine-tuning configuration that you've used πŸ€”\r\n\r\nThanks @stefan-it. I think I may figure it out after I checked the loss, which converges slowly and which value remains 9.x after 5 epochs. Then I lower the learning rate from default **5e-5** to 10 times smaller, i.e. **5e-6**, then I can get 0.92 score. \r\n\r\nI also fine-tuned with **BERT-large** model using the default learning rate, and I am able to get a reasonable f1 score. Is there any special about **ELECTRA** large settings? Does batch size matter? It is limited to 12 due to GPU in my case. I saw somewhere people suggest larger batch size, smaller learning rate and longer training duration to reproduce good results. Could you share your configuration of **ELECTRA-LARGE**? Thanks a lot?\r\n\r\np.s., my configuration:\r\n\r\n> {\r\n \"data_dir\": \"\",\r\n \"labels\": \"\",\r\n \"model_name_or_path\": \"google/electra-large-discriminator\",\r\n \"output_dir\": \"\",\r\n \"max_seq_length\": 128,\r\n \"num_train_epochs\": 5,\r\n \"per_device_train_batch_size\": 12,\r\n \"save_steps\": 750,\r\n \"seed\": 1,\r\n \"do_train\": true,\r\n \"do_eval\": true,\r\n \"do_predict\": true\r\n}\r\n" ]
1,586
1,599
1,589
CONTRIBUTOR
null
# ❓ Questions & Help Most transformer models in the library can be fine-tuned for NER tasks. In https://huggingface.co/transformers/v2.2.0/examples.html#named-entity-recognition, the performances of the roberta, bert, and distilbert have been reported. However, I did not find performances achieved by other models like XLNet. By any chance, does anyone experiment with other models and can report the performances for models like XLNet and albert on CONLL-2003?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3677/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3676
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3676/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3676/comments
https://api.github.com/repos/huggingface/transformers/issues/3676/events
https://github.com/huggingface/transformers/issues/3676
595,945,439
MDU6SXNzdWU1OTU5NDU0Mzk=
3,676
gpt2-medium fine-tuned model.generate joins words and sentences together without space or newline
{ "login": "albertbn", "id": 13770359, "node_id": "MDQ6VXNlcjEzNzcwMzU5", "avatar_url": "https://avatars.githubusercontent.com/u/13770359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertbn", "html_url": "https://github.com/albertbn", "followers_url": "https://api.github.com/users/albertbn/followers", "following_url": "https://api.github.com/users/albertbn/following{/other_user}", "gists_url": "https://api.github.com/users/albertbn/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertbn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertbn/subscriptions", "organizations_url": "https://api.github.com/users/albertbn/orgs", "repos_url": "https://api.github.com/users/albertbn/repos", "events_url": "https://api.github.com/users/albertbn/events{/privacy}", "received_events_url": "https://api.github.com/users/albertbn/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 2107543444, "node_id": "MDU6TGFiZWwyMTA3NTQzNDQ0", "url": "https://api.github.com/repos/huggingface/transformers/labels/fp16", "name": "fp16", "color": "d93f0b", "default": false, "description": "" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "I can provide a google colab notebook, the fine-tuned model, the training data (or whatever needed) to show the issue. I hope it's a tokenizer issue rather than a training fault on my side, since re-training would cost a lot of cash (the training data is quite big - ~22 million lines)\r\n\r\n ", "I'm suspecting `fp16` to be the reason. Not sure whether this is supported for `generation` yet. @sshleifer - do you know more about this maybe?", "A colab notebook would be great though. Or even better would be if you could upload your model to the community models :-) \r\nThis would make it very easy for us to find the bug:\r\nhttps://huggingface.co/transformers/model_sharing.html", "Yeah some sort of sharing to diagnose. I don't think fp16 is the problem. What does `outputs[0]` look like? ", "hi, \r\n\r\nthanks for your reply, \r\n\r\n[https://huggingface.co/albertbn/gpt2-medium-finetuned-ads-fp16-blocksz512](url)\r\n\r\nthe above is the model, \r\n\r\nyou can re-create the error using the following:\r\n\r\n```\r\nimport torch\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(path)\r\nmodel = GPT2LMHeadModel.from_pretrained(path)\r\n\r\nif torch.cuda.is_available():\r\n model.to('cuda')\r\n\r\ninput_context = '''Find a plumber nearby!'''\r\ninput_ids = torch.tensor(tokenizer.encode(input_context)).unsqueeze(0) \r\nif torch.cuda.is_available():\r\n input_ids = input_ids.cuda()\r\n\r\nmax_length=150; temperature=.175; repetition_penalty=1.3; top_k=70; top_p=0.67\r\noutputs = model.generate(\r\n input_ids=input_ids, max_length=max_length, temperature=temperature, repetition_penalty=repetition_penalty,\r\n bos_token_id=tokenizer.bos_token_id,\r\n top_k=top_k,\r\n top_p=top_p\r\n )\r\n\r\nret = tokenizer.decode(outputs[0], skip_special_tokens=True)\r\n\r\nprint(ret)\r\n\r\n# Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence\r\n# Find a plumber nearby!\r\n# Plumbing Services in Wellington NZ.\r\n# 24/7 Emergency Plumbers Near You, Call Now For Fast Service or Repair of Your Plumbing!Need to Fix Leaking Pipes?β€ŽNZ's #1 Gasfitterℒ️Call Us Today for Expert Advice & The Best Service!Get the Right Gasfitting Solution for your Home. Get It Installed Now - Free Quote Here !{KeyWord:Gas Fitting Installation}Quick And Efficient Installers\r\n\r\n```\r\n\r\nyou can see the issue (lines stuck without \\n) in the last line, starting with: 24/7 Emergency Plumbers...\r\n\r\nthank you in advance,\r\nAlbert", "> Yeah some sort of sharing to diagnose. I don't think fp16 is the problem. What does `outputs[0]` look like?\r\n\r\noutputs[0] for the example I've posted looks like this:\r\n\r\n```\r\ntensor([16742, 257, 458, 4494, 6716, 0, 198, 3646, 28149, 6168,\r\n 287, 30597, 26905, 13, 198, 1731, 14, 22, 18154, 1345,\r\n 17024, 20173, 921, 11, 4889, 2735, 1114, 12549, 4809, 393,\r\n 28912, 286, 3406, 1345, 28149, 0, 23037, 284, 13268, 1004,\r\n 868, 350, 18636, 30, 48261, 37371, 338, 1303, 16, 14345,\r\n 69, 1967, 8151, 37929, 14134, 4021, 6288, 329, 25516, 42708,\r\n 1222, 383, 6705, 4809, 0, 3855, 262, 6498, 14345, 32232,\r\n 28186, 329, 534, 5995, 13, 3497, 632, 2262, 4262, 2735,\r\n 532, 3232, 19879, 3423, 5145, 90, 9218, 26449, 25, 39699,\r\n 376, 2535, 32588, 92, 21063, 843, 412, 5632, 15545, 364],\r\n device='cuda:0')\r\n```\r\nthere is no white space separating 0, 23037 (23037 is the only index in the output: Plumbing**!Need** )\r\n", "This issue has been automatically marked as stale because it has not had recent activity. 
It will be closed if no further activity occurs. Thank you for your contributions.\n", "Regarding the second question\r\n\r\n> 2. I get a 'warning':\r\n> Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence\r\n\r\nIt's explained [here](https://jaketae.github.io/study/gpt2/#setup): \"For open-end generation, HuggingFace will set the padding token ID to be equal to the end-of-sentence token ID\". Code is here: https://github.com/huggingface/transformers/blob/b880508440f43f80e35a78ccd2a32f3bde91cb23/src/transformers/generation_utils.py#L410-L414" ]
1,586
1,615
1,596
NONE
null
Hi, I have successfully fine-tuned and used a gpt2 model to generate text. My training corpus consist of short sentences - 3-5 words and longer ones 10-15 words. All separated by new line character. Sometimes ending with [ . ! ? ] sometimes not `outputs = model.generate( input_ids=input_ids, max_length=max_length, temperature=temperature, repetition_penalty=repetition_penalty, bos_token_id=tokenizer.bos_token_id, top_k=top_k, top_p=top_p )` `ret = tokenizer.decode(outputs[0], skip_special_tokens=True)` Then I fine-tuned a gpt2-medium model. The training corpus was slightly different, but structured the same as described above. I had to use --fp16 and --block_size=512 to fit in the GPU memory limits. The result: using the fine-tuned a gpt2-medium model, I am experiencing a couple of issues: 1. I get frequent issues with lines or words stuck, without any new line or space: example: **word1Word2Word3** or: **line 1 with some words!Another line with some wordsℒ️Next line...** 2. I get a 'warning': Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence I've tried playing with the decode parameters with no luck: `ret = tokenizer.decode(outputs[0], skip_special_tokens=False, clean_up_tokenization_spaces=False)` Help appreciated, thanks in advance, Albert
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3676/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3676/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3675
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3675/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3675/comments
https://api.github.com/repos/huggingface/transformers/issues/3675/events
https://github.com/huggingface/transformers/issues/3675
595,921,356
MDU6SXNzdWU1OTU5MjEzNTY=
3,675
Wrong tokenizer configuration in sentiment-analysis pipeline
{ "login": "leonhardhennig", "id": 8458299, "node_id": "MDQ6VXNlcjg0NTgyOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/8458299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leonhardhennig", "html_url": "https://github.com/leonhardhennig", "followers_url": "https://api.github.com/users/leonhardhennig/followers", "following_url": "https://api.github.com/users/leonhardhennig/following{/other_user}", "gists_url": "https://api.github.com/users/leonhardhennig/gists{/gist_id}", "starred_url": "https://api.github.com/users/leonhardhennig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leonhardhennig/subscriptions", "organizations_url": "https://api.github.com/users/leonhardhennig/orgs", "repos_url": "https://api.github.com/users/leonhardhennig/repos", "events_url": "https://api.github.com/users/leonhardhennig/events{/privacy}", "received_events_url": "https://api.github.com/users/leonhardhennig/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1771187924, "node_id": "MDU6TGFiZWwxNzcxMTg3OTI0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline", "name": "Core: Pipeline", "color": "FF7066", "default": false, "description": "Internals of the library; Pipeline." } ]
closed
false
null
[]
[ "I'm having the same issue", "> I'm having the same issue\r\n\r\nI got this working by using the following code:\r\n```\r\n # Allocate a pipeline for sentiment-analysis\r\n nlp = pipeline(\"sentiment-analysis\")\r\n nlp.tokenizer = transformers.DistilBertTokenizer.from_pretrained(\"**distilbert-base-uncased**\")\r\n```\r\n\r\nThanks for pointing me in the right direction LysandreJik!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,592
1,592
NONE
null
# πŸ› Bug ## Information When following the Pipelines Notebook 03-pipelines.ipynb, Sentiment Analysis tasks gives wrong result ("NEGATIVE") for example 'Such a nice weather outside !'. ``` nlp_sentence_classif = pipeline('sentiment-analysis') nlp_sentence_classif('Such a nice weather outside !') [{'label': 'NEGATIVE', 'score': 0.97545063}] ``` Probable reason: pipelines.py configuration uses uncased model, but cased tokenizer. Tokenizer should probably be 'distilbert-base-uncased'. ``` "sentiment-analysis": { "impl": TextClassificationPipeline, "tf": TFAutoModelForSequenceClassification if is_tf_available() else None, "pt": AutoModelForSequenceClassification if is_torch_available() else None, "default": { "model": { "pt": "distilbert-base-uncased-finetuned-sst-2-english", "tf": "distilbert-base-uncased-finetuned-sst-2-english", }, "config": "distilbert-base-uncased-finetuned-sst-2-english", "tokenizer": "distilbert-base-cased", }, }, ``` Model I am using (Bert, XLNet ...): distilbert-base-uncased-finetuned-sst-2-english (preconfigured sentiment-analysis pipeline) Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: 03-pipelines.ipynb * [ ] my own modified scripts: (give details below) ## Expected behavior Example sentence should be labeled as POSITIVE. ## Environment info - `transformers` version: 2.8.0 - Platform: Linux Mint - Python version: 3.7 - PyTorch version (GPU?): 1.4.0 (no) - Tensorflow version (GPU?): 2.1.0 (no) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3675/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3675/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3674
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3674/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3674/comments
https://api.github.com/repos/huggingface/transformers/issues/3674/events
https://github.com/huggingface/transformers/pull/3674
595,848,801
MDExOlB1bGxSZXF1ZXN0NDAwMjUxMzMy
3,674
[Examples, Benchmark] Improve benchmark utils
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,586
1,586
1,586
MEMBER
null
This PR improves the `benchmarks.py` file a bit: - "results, memory" are renamed to "time, memory" - all print statements can optionally be saved in a log file - the CSV file output format is improved - better naming in general
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3674/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3674", "html_url": "https://github.com/huggingface/transformers/pull/3674", "diff_url": "https://github.com/huggingface/transformers/pull/3674.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3674.patch", "merged_at": 1586291157000 }
https://api.github.com/repos/huggingface/transformers/issues/3673
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3673/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3673/comments
https://api.github.com/repos/huggingface/transformers/issues/3673/events
https://github.com/huggingface/transformers/issues/3673
595,748,049
MDU6SXNzdWU1OTU3NDgwNDk=
3,673
TypeError while loading the model built from scratch using transformer
{ "login": "ishaansharma", "id": 8963395, "node_id": "MDQ6VXNlcjg5NjMzOTU=", "avatar_url": "https://avatars.githubusercontent.com/u/8963395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ishaansharma", "html_url": "https://github.com/ishaansharma", "followers_url": "https://api.github.com/users/ishaansharma/followers", "following_url": "https://api.github.com/users/ishaansharma/following{/other_user}", "gists_url": "https://api.github.com/users/ishaansharma/gists{/gist_id}", "starred_url": "https://api.github.com/users/ishaansharma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ishaansharma/subscriptions", "organizations_url": "https://api.github.com/users/ishaansharma/orgs", "repos_url": "https://api.github.com/users/ishaansharma/repos", "events_url": "https://api.github.com/users/ishaansharma/events{/privacy}", "received_events_url": "https://api.github.com/users/ishaansharma/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "looks like it's not able to find vocabulary file. Make sure there is a vocab.txt file for bert. Otherwise, you can simply load it by `tokenizer = BertTokenizer(vocab_file=\"path to vocab\", and configs)`.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,593
1,593
NONE
null
# πŸ› Bug > TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-33-cd040b700e71> in <module>() 3 from transformers import BertTokenizer, AdamW, BertForNextSentencePrediction 4 ----> 5 tokenizer = BertTokenizer.from_pretrained('/content/drive/My Drive/Colab Notebooks/data/test/') 3 frames /usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs) 391 392 """ --> 393 return cls._from_pretrained(*inputs, **kwargs) 394 395 @classmethod /usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 542 # Instantiate tokenizer. 543 try: --> 544 tokenizer = cls(*init_inputs, **init_kwargs) 545 except OSError: 546 raise OSError( /usr/local/lib/python3.6/dist-packages/transformers/tokenization_bert.py in __init__(self, vocab_file, do_lower_case, do_basic_tokenize, never_split, unk_token, sep_token, pad_token, cls_token, mask_token, tokenize_chinese_chars, **kwargs) 186 self.max_len_sentences_pair = self.max_len - 3 # take into account special tokens 187 --> 188 if not os.path.isfile(vocab_file): 189 raise ValueError( 190 "Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained " /usr/lib/python3.6/genericpath.py in isfile(path) 28 """Test whether a path is a regular file""" 29 try: ---> 30 st = os.stat(path) 31 except OSError: 32 return False TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType ## Information I am trying to fine-tune the model that I built from scratch using transformers. When I am trying to load the tokenizer from the model that is just made, it is giving Type Error Model I am using (Bert, XLNet ...): Model is built from scratch using https://huggingface.co/blog/how-to-train Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ``` import torch import transformers from transformers import BertTokenizer, AdamW, BertForNextSentencePrediction tokenizer = BertTokenizer.from_pretrained('/content/drive/My Drive/Colab Notebooks/data/model/') ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Google Colab - Python version: 3.x - PyTorch version (GPU?):'1.4.0' - Tensorflow version (GPU?):'2.8.0' - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3673/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3672
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3672/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3672/comments
https://api.github.com/repos/huggingface/transformers/issues/3672/events
https://github.com/huggingface/transformers/issues/3672
595,714,199
MDU6SXNzdWU1OTU3MTQxOTk=
3,672
How to train BART text summarization with your own data?
{ "login": "thedrowsywinger", "id": 23182970, "node_id": "MDQ6VXNlcjIzMTgyOTcw", "avatar_url": "https://avatars.githubusercontent.com/u/23182970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thedrowsywinger", "html_url": "https://github.com/thedrowsywinger", "followers_url": "https://api.github.com/users/thedrowsywinger/followers", "following_url": "https://api.github.com/users/thedrowsywinger/following{/other_user}", "gists_url": "https://api.github.com/users/thedrowsywinger/gists{/gist_id}", "starred_url": "https://api.github.com/users/thedrowsywinger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thedrowsywinger/subscriptions", "organizations_url": "https://api.github.com/users/thedrowsywinger/orgs", "repos_url": "https://api.github.com/users/thedrowsywinger/repos", "events_url": "https://api.github.com/users/thedrowsywinger/events{/privacy}", "received_events_url": "https://api.github.com/users/thedrowsywinger/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "You can try to move transformer_base file to the same location of run_bart_sum.py.", "Hi,\r\n1. As written in `README.md`\r\n> \"To use your own data, copy that files format. Each article to be summarized is on its own line.\"\r\n\r\n I think you should insert in cnn_dm folder your files renamed `train.source`, `train.target`, `test.source`, `test.target`, `val.source`, `val.target`, where in each file you have respectively a source text and a target text per line.\r\n\r\n2. You are not using the script `run_train.sh`, as suggested in the `README.md`. In the `run_train.sh` there are a series of export commands that you missed. \r\nThe last one should fix your issue.\r\nHope it helps.\r\n\r\n ```export OUTPUT_DIR_NAME=bart_sum\r\nexport CURRENT_DIR=${PWD}\r\nexport OUTPUT_DIR=${CURRENT_DIR}/${OUTPUT_DIR_NAME}\r\n\r\n# Make output directory if it doesn't exist\r\nmkdir -p $OUTPUT_DIR\r\n\r\n#Add parent directory to python path to access transformer_base.py\r\nexport PYTHONPATH=\"../../\":\"${PYTHONPATH}\"```\r\n\r\n", "Closing, @teelinsan 's answer is correct. " ]
1,586
1,587
1,587
NONE
null
# ❓ Questions & Help ## Details This is actually a two part question. I have noticed that in [1](https://github.com/huggingface/transformers/blob/master/examples/summarization/bart/run_train.sh) instructions have been given to train with the cnn/dm data. How would we train it with our own data? Should the file format be .story? And secondly how exactly do we handle the python path in google colab? ![image](https://user-images.githubusercontent.com/23182970/78650840-49d6a580-78e1-11ea-8796-d4267bf44714.png) I have tried in both these ways and failed. Link to this question in SO: [2] (https://stackoverflow.com/questions/61058171/no-module-named-transformer-base/61070453#61070453)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3672/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3671
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3671/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3671/comments
https://api.github.com/repos/huggingface/transformers/issues/3671/events
https://github.com/huggingface/transformers/issues/3671
595,714,016
MDU6SXNzdWU1OTU3MTQwMTY=
3,671
Loading pre-trained ELECTRA checkpoint to HuggingFace
{ "login": "DevKretov", "id": 38000417, "node_id": "MDQ6VXNlcjM4MDAwNDE3", "avatar_url": "https://avatars.githubusercontent.com/u/38000417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DevKretov", "html_url": "https://github.com/DevKretov", "followers_url": "https://api.github.com/users/DevKretov/followers", "following_url": "https://api.github.com/users/DevKretov/following{/other_user}", "gists_url": "https://api.github.com/users/DevKretov/gists{/gist_id}", "starred_url": "https://api.github.com/users/DevKretov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DevKretov/subscriptions", "organizations_url": "https://api.github.com/users/DevKretov/orgs", "repos_url": "https://api.github.com/users/DevKretov/repos", "events_url": "https://api.github.com/users/DevKretov/events{/privacy}", "received_events_url": "https://api.github.com/users/DevKretov/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi! I don't really understand how you obtained what you did, what script did you use, what arguments did you put in? The procedure to convert an ELECTRA checkpoint from the official implementation to our implementation is to do the following (feel free to skip the first steps if you already have your checkpoint):\r\n\r\n```bash\r\n# Get a checkpoint\r\nwget https://storage.googleapis.com/electra-data/electra_small.zip \r\n\r\n# Unzip it\r\nunzip electra_small.zip \r\n\r\n# Get an appropriate configuration file for your model (see below)\r\nvim electra_small/config.json\r\n\r\n# Run the script\r\npython $TRANSFORMERS/src/transformers/convert_electra_original_tf_checkpoint_to_pytorch.py \\ \r\n --tf_checkpoint_path=./electra_small/electra_small \\\r\n --config_file=./electra_small/config.json \\\r\n --pytorch_dump_path=pytorch_model.bin \\\r\n --discriminator_or_generator=discriminator\r\n\r\n```\r\n\r\nFrom this you should get the following output:\r\n\r\n```bash\r\nInitialize PyTorch weight ['discriminator_predictions', 'dense', 'bias'] discriminator_predictions/dense/bias\r\nInitialize PyTorch weight ['discriminator_predictions', 'dense', 'kernel'] discriminator_predictions/dense/kernel\r\nInitialize PyTorch weight ['discriminator_predictions', 'dense_prediction', 'bias'] discriminator_predictions/dense_1/bias\r\nInitialize PyTorch weight ['discriminator_predictions', 'dense_prediction', 'kernel'] discriminator_predictions/dense_1/kernel\r\nInitialize PyTorch weight ['electra', 'embeddings', 'LayerNorm', 'beta'] electra/embeddings/LayerNorm/beta\r\nInitialize PyTorch weight ['electra', 'embeddings', 'LayerNorm', 'gamma'] electra/embeddings/LayerNorm/gamma\r\n[...]\r\nSkipping generator_predictions/dense/bias ['generator_predictions', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator_predictions'\r\nSkipping generator_predictions/dense/kernel ['generator_predictions', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator_predictions'\r\nSkipping generator_predictions/output_bias ['generator_lm_head', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator_lm_head'\r\nINFO:transformers.modeling_electra:Skipping generator_predictions/temperature\r\nINFO:transformers.modeling_electra:Skipping global_step\r\nSave PyTorch model to pytorch_model.bin\r\n```\r\n\r\nWhich tells you that it ignored the generator layers, but saved the discriminator layers :).\r\n\r\nThe tricky part here is to craft a configuration file specific to the model. I want to obtain the small discriminator from this checkpoint, so the configuration file is the following:\r\n\r\n```json\r\n{\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"hidden_size\": 256,\r\n \"intermediate_size\": 1024,\r\n \"num_attention_heads\": 4,\r\n \"num_hidden_layers\": 12,\r\n \"embedding_size\": 128,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"initializer_range\": 0.02,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 30522\r\n}\r\n```\r\n\r\nYou can either write it yourself or instantiate it from a `transformers.ElectraConfig` and save it as a JSON file.", "@LysandreJik I used your [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_electra_original_tf_checkpoint_to_pytorch.py) to convert the trained model from the [origin repo](https://github.com/google-research/electra) (training on my own data) and it worked. 
I wonder whether an equal technique to convert this trained model to the Electra tf2 model that implemented in HuggingFace?", "@nguyenvulebinh glad the script worked! The script only outputs a PyTorch model but it's very simple to convert that model to TF2. Once you have the converted model, you can then load it in TensorFlow by specifying the `from_pt` option:\r\n\r\n```py\r\nfrom transformers import TFElectraForPreTraining\r\n\r\nmodel = TFElectraForPreTraining.from_pretrained(\"directory\", from_pt=True)\r\n```\r\n\r\nYou can then save that model in `.h5` format so that it gets natively loaded by TensorFlow in the future:\r\n\r\n```py\r\nmodel.save_pretrained(\"directory-tf\")\r\n\r\n# Can now load directly from TensorFlow without the `from_pt` option:\r\nmodel = TFElectraForPreTraining.from_pretrained(\"directory-tf\")\r\n```", "@LysandreJik It's really cool! Thank you! I did it 😍", "@LysandreJik \r\n\r\nHi,\r\n\r\nI have a question on pre-training Electra using the PyTorch base model.\r\n\r\nIf I want to continue pretraining the Electra model (HuggingFace implementation) on a domain-specific corpus, which model should I use to initialize - the generator or discriminator?\r\n\r\nThanks!", "When using the ELECTRA method what you're really interested in is the discriminator.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,592
1,592
NONE
null
# ❓ Questions & Help Hello everyone! I have been struggling with HuggingFace interface for loading ELECTRA model via transformers.TFElectraModel class. Since TF version of ElectraModel didn't manage to help me restore the checkpoint from the official Google Research implementation (they save only .ckpl files) due to this error: ` NotImplementedError: Weights may only be loaded based on topology into Models when loading TensorFlow-formatted weights (got by_name=True to load_weights). ` However, the normal ElectraModel.from_pretrained() procedure managed to load my model, writing this to the stdout: ``` Skipping discriminator_predictions/dense/bias ['discriminator_predictions', 'dense', 'bias'] 'ElectraModel' object has no attribute 'discriminator_predictions' Skipping discriminator_predictions/dense/bias/adam_m ['discriminator_predictions', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'discriminator_predictions' Skipping discriminator_predictions/dense/bias/adam_v ['discriminator_predictions', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'discriminator_predictions' Skipping discriminator_predictions/dense/kernel ['discriminator_predictions', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'discriminator_predictions' Skipping discriminator_predictions/dense/kernel/adam_m ['discriminator_predictions', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'discriminator_predictions' Skipping discriminator_predictions/dense/kernel/adam_v ['discriminator_predictions', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'discriminator_predictions' Skipping discriminator_predictions/dense_1/bias ['discriminator_predictions', 'dense_prediction', 'bias'] 'ElectraModel' object has no attribute 'discriminator_predictions' Skipping discriminator_predictions/dense_1/bias/adam_m ['discriminator_predictions', 'dense_prediction', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'discriminator_predictions' Skipping discriminator_predictions/dense_1/bias/adam_v ['discriminator_predictions', 'dense_prediction', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'discriminator_predictions' Skipping discriminator_predictions/dense_1/kernel ['discriminator_predictions', 'dense_prediction', 'kernel'] 'ElectraModel' object has no attribute 'discriminator_predictions' Skipping discriminator_predictions/dense_1/kernel/adam_m ['discriminator_predictions', 'dense_prediction', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'discriminator_predictions' Skipping discriminator_predictions/dense_1/kernel/adam_v ['discriminator_predictions', 'dense_prediction', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'discriminator_predictions' Skipping electra/embeddings/LayerNorm/beta ['electra', 'embeddings', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/LayerNorm/beta/adam_m ['electra', 'embeddings', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/LayerNorm/beta/adam_v ['electra', 'embeddings', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/LayerNorm/gamma ['electra', 'embeddings', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/LayerNorm/gamma/adam_m ['electra', 'embeddings', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping 
electra/embeddings/LayerNorm/gamma/adam_v ['electra', 'embeddings', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/position_embeddings ['electra', 'embeddings', 'position_embeddings'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/position_embeddings/adam_m ['electra', 'embeddings', 'position_embeddings', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/position_embeddings/adam_v ['electra', 'embeddings', 'position_embeddings', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/token_type_embeddings ['electra', 'embeddings', 'token_type_embeddings'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/token_type_embeddings/adam_m ['electra', 'embeddings', 'token_type_embeddings', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/token_type_embeddings/adam_v ['electra', 'embeddings', 'token_type_embeddings', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/word_embeddings ['electra', 'embeddings', 'word_embeddings'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/word_embeddings/adam_m ['electra', 'embeddings', 'word_embeddings', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/embeddings/word_embeddings/adam_v ['electra', 'embeddings', 'word_embeddings', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/output/LayerNorm/gamma ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/output/dense/bias ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/output/dense/bias/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/output/dense/bias/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/output/dense/kernel ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' 
Skipping electra/encoder/layer_0/attention/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/key/bias ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/key/bias/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/key/bias/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/key/kernel ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/key/kernel/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/key/kernel/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/query/bias ['electra', 'encoder', 'layer_0', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/query/bias/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/query/bias/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/query/kernel ['electra', 'encoder', 'layer_0', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/query/kernel/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/query/kernel/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/value/bias ['electra', 'encoder', 'layer_0', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/value/bias/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/value/bias/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/value/kernel ['electra', 'encoder', 'layer_0', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/value/kernel/adam_m ['electra', 'encoder', 
'layer_0', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/attention/self/value/kernel/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/intermediate/dense/bias ['electra', 'encoder', 'layer_0', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/intermediate/dense/bias/adam_m ['electra', 'encoder', 'layer_0', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/intermediate/dense/bias/adam_v ['electra', 'encoder', 'layer_0', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/intermediate/dense/kernel ['electra', 'encoder', 'layer_0', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/intermediate/dense/kernel/adam_m ['electra', 'encoder', 'layer_0', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/intermediate/dense/kernel/adam_v ['electra', 'encoder', 'layer_0', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/output/LayerNorm/beta ['electra', 'encoder', 'layer_0', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_0', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_0', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/output/LayerNorm/gamma ['electra', 'encoder', 'layer_0', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_0', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_0', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/output/dense/bias ['electra', 'encoder', 'layer_0', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/output/dense/bias/adam_m ['electra', 'encoder', 'layer_0', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/output/dense/bias/adam_v ['electra', 'encoder', 'layer_0', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/output/dense/kernel ['electra', 'encoder', 'layer_0', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_0', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_0/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_0', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 
'electra' Skipping electra/encoder/layer_1/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/output/LayerNorm/gamma ['electra', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/output/dense/bias ['electra', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/output/dense/bias/adam_m ['electra', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/output/dense/bias/adam_v ['electra', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/output/dense/kernel ['electra', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/key/bias ['electra', 'encoder', 'layer_1', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/key/bias/adam_m ['electra', 'encoder', 'layer_1', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/key/bias/adam_v ['electra', 'encoder', 'layer_1', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/key/kernel ['electra', 'encoder', 'layer_1', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/key/kernel/adam_m ['electra', 'encoder', 'layer_1', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/key/kernel/adam_v ['electra', 'encoder', 'layer_1', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 
'electra' Skipping electra/encoder/layer_1/attention/self/query/bias ['electra', 'encoder', 'layer_1', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/query/bias/adam_m ['electra', 'encoder', 'layer_1', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/query/bias/adam_v ['electra', 'encoder', 'layer_1', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/query/kernel ['electra', 'encoder', 'layer_1', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/query/kernel/adam_m ['electra', 'encoder', 'layer_1', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/query/kernel/adam_v ['electra', 'encoder', 'layer_1', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/value/bias ['electra', 'encoder', 'layer_1', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/value/bias/adam_m ['electra', 'encoder', 'layer_1', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/value/bias/adam_v ['electra', 'encoder', 'layer_1', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/value/kernel ['electra', 'encoder', 'layer_1', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/value/kernel/adam_m ['electra', 'encoder', 'layer_1', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/attention/self/value/kernel/adam_v ['electra', 'encoder', 'layer_1', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/intermediate/dense/bias ['electra', 'encoder', 'layer_1', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/intermediate/dense/bias/adam_m ['electra', 'encoder', 'layer_1', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/intermediate/dense/bias/adam_v ['electra', 'encoder', 'layer_1', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/intermediate/dense/kernel ['electra', 'encoder', 'layer_1', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/intermediate/dense/kernel/adam_m ['electra', 'encoder', 'layer_1', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/intermediate/dense/kernel/adam_v ['electra', 'encoder', 'layer_1', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/output/LayerNorm/beta ['electra', 'encoder', 'layer_1', 'output', 
'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_1', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_1', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/output/LayerNorm/gamma ['electra', 'encoder', 'layer_1', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_1', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_1', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/output/dense/bias ['electra', 'encoder', 'layer_1', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/output/dense/bias/adam_m ['electra', 'encoder', 'layer_1', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/output/dense/bias/adam_v ['electra', 'encoder', 'layer_1', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/output/dense/kernel ['electra', 'encoder', 'layer_1', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_1', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_1/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_1', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/output/LayerNorm/gamma ['electra', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/output/dense/bias ['electra', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/output/dense/bias/adam_m ['electra', 'encoder', 
'layer_10', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/output/dense/bias/adam_v ['electra', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/output/dense/kernel ['electra', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/key/bias ['electra', 'encoder', 'layer_10', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/key/bias/adam_m ['electra', 'encoder', 'layer_10', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/key/bias/adam_v ['electra', 'encoder', 'layer_10', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/key/kernel ['electra', 'encoder', 'layer_10', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/key/kernel/adam_m ['electra', 'encoder', 'layer_10', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/key/kernel/adam_v ['electra', 'encoder', 'layer_10', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/query/bias ['electra', 'encoder', 'layer_10', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/query/bias/adam_m ['electra', 'encoder', 'layer_10', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/query/bias/adam_v ['electra', 'encoder', 'layer_10', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/query/kernel ['electra', 'encoder', 'layer_10', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/query/kernel/adam_m ['electra', 'encoder', 'layer_10', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/query/kernel/adam_v ['electra', 'encoder', 'layer_10', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/value/bias ['electra', 'encoder', 'layer_10', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/value/bias/adam_m ['electra', 'encoder', 'layer_10', 'attention', 'self', 'value', 
'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/value/bias/adam_v ['electra', 'encoder', 'layer_10', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/value/kernel ['electra', 'encoder', 'layer_10', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/value/kernel/adam_m ['electra', 'encoder', 'layer_10', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/attention/self/value/kernel/adam_v ['electra', 'encoder', 'layer_10', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/intermediate/dense/bias ['electra', 'encoder', 'layer_10', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/intermediate/dense/bias/adam_m ['electra', 'encoder', 'layer_10', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/intermediate/dense/bias/adam_v ['electra', 'encoder', 'layer_10', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/intermediate/dense/kernel ['electra', 'encoder', 'layer_10', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/intermediate/dense/kernel/adam_m ['electra', 'encoder', 'layer_10', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/intermediate/dense/kernel/adam_v ['electra', 'encoder', 'layer_10', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/output/LayerNorm/beta ['electra', 'encoder', 'layer_10', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_10', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_10', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/output/LayerNorm/gamma ['electra', 'encoder', 'layer_10', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_10', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_10', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/output/dense/bias ['electra', 'encoder', 'layer_10', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/output/dense/bias/adam_m ['electra', 'encoder', 'layer_10', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/output/dense/bias/adam_v ['electra', 'encoder', 'layer_10', 'output', 'dense', 'bias', 
'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/output/dense/kernel ['electra', 'encoder', 'layer_10', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_10', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_10/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_10', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/output/LayerNorm/gamma ['electra', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/output/dense/bias ['electra', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/output/dense/bias/adam_m ['electra', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/output/dense/bias/adam_v ['electra', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/output/dense/kernel ['electra', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/key/bias ['electra', 'encoder', 'layer_11', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/key/bias/adam_m ['electra', 'encoder', 'layer_11', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/key/bias/adam_v ['electra', 'encoder', 'layer_11', 'attention', 'self', 'key', 'bias', 'adam_v'] 
'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/key/kernel ['electra', 'encoder', 'layer_11', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/key/kernel/adam_m ['electra', 'encoder', 'layer_11', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/key/kernel/adam_v ['electra', 'encoder', 'layer_11', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/query/bias ['electra', 'encoder', 'layer_11', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/query/bias/adam_m ['electra', 'encoder', 'layer_11', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/query/bias/adam_v ['electra', 'encoder', 'layer_11', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/query/kernel ['electra', 'encoder', 'layer_11', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/query/kernel/adam_m ['electra', 'encoder', 'layer_11', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/query/kernel/adam_v ['electra', 'encoder', 'layer_11', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/value/bias ['electra', 'encoder', 'layer_11', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/value/bias/adam_m ['electra', 'encoder', 'layer_11', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/value/bias/adam_v ['electra', 'encoder', 'layer_11', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/value/kernel ['electra', 'encoder', 'layer_11', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/value/kernel/adam_m ['electra', 'encoder', 'layer_11', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/attention/self/value/kernel/adam_v ['electra', 'encoder', 'layer_11', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/intermediate/dense/bias ['electra', 'encoder', 'layer_11', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/intermediate/dense/bias/adam_m ['electra', 'encoder', 'layer_11', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/intermediate/dense/bias/adam_v ['electra', 'encoder', 'layer_11', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping 
electra/encoder/layer_11/intermediate/dense/kernel ['electra', 'encoder', 'layer_11', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/intermediate/dense/kernel/adam_m ['electra', 'encoder', 'layer_11', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/intermediate/dense/kernel/adam_v ['electra', 'encoder', 'layer_11', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/output/LayerNorm/beta ['electra', 'encoder', 'layer_11', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_11', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_11', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/output/LayerNorm/gamma ['electra', 'encoder', 'layer_11', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_11', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_11', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/output/dense/bias ['electra', 'encoder', 'layer_11', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/output/dense/bias/adam_m ['electra', 'encoder', 'layer_11', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/output/dense/bias/adam_v ['electra', 'encoder', 'layer_11', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/output/dense/kernel ['electra', 'encoder', 'layer_11', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_11', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_11/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_11', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/output/LayerNorm/gamma ['electra', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping 
electra/encoder/layer_2/attention/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/output/dense/bias ['electra', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/output/dense/bias/adam_m ['electra', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/output/dense/bias/adam_v ['electra', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/output/dense/kernel ['electra', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/key/bias ['electra', 'encoder', 'layer_2', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/key/bias/adam_m ['electra', 'encoder', 'layer_2', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/key/bias/adam_v ['electra', 'encoder', 'layer_2', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/key/kernel ['electra', 'encoder', 'layer_2', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/key/kernel/adam_m ['electra', 'encoder', 'layer_2', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/key/kernel/adam_v ['electra', 'encoder', 'layer_2', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/query/bias ['electra', 'encoder', 'layer_2', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/query/bias/adam_m ['electra', 'encoder', 'layer_2', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/query/bias/adam_v ['electra', 'encoder', 'layer_2', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/query/kernel ['electra', 'encoder', 'layer_2', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping 
electra/encoder/layer_2/attention/self/query/kernel/adam_m ['electra', 'encoder', 'layer_2', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/query/kernel/adam_v ['electra', 'encoder', 'layer_2', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/value/bias ['electra', 'encoder', 'layer_2', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/value/bias/adam_m ['electra', 'encoder', 'layer_2', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/value/bias/adam_v ['electra', 'encoder', 'layer_2', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/value/kernel ['electra', 'encoder', 'layer_2', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/value/kernel/adam_m ['electra', 'encoder', 'layer_2', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/attention/self/value/kernel/adam_v ['electra', 'encoder', 'layer_2', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/intermediate/dense/bias ['electra', 'encoder', 'layer_2', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/intermediate/dense/bias/adam_m ['electra', 'encoder', 'layer_2', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/intermediate/dense/bias/adam_v ['electra', 'encoder', 'layer_2', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/intermediate/dense/kernel ['electra', 'encoder', 'layer_2', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/intermediate/dense/kernel/adam_m ['electra', 'encoder', 'layer_2', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/intermediate/dense/kernel/adam_v ['electra', 'encoder', 'layer_2', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/output/LayerNorm/beta ['electra', 'encoder', 'layer_2', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_2', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_2', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/output/LayerNorm/gamma ['electra', 'encoder', 'layer_2', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_2', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no 
attribute 'electra' Skipping electra/encoder/layer_2/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_2', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/output/dense/bias ['electra', 'encoder', 'layer_2', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/output/dense/bias/adam_m ['electra', 'encoder', 'layer_2', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/output/dense/bias/adam_v ['electra', 'encoder', 'layer_2', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/output/dense/kernel ['electra', 'encoder', 'layer_2', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_2', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_2/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_2', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/output/LayerNorm/gamma ['electra', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/output/dense/bias ['electra', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/output/dense/bias/adam_m ['electra', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/output/dense/bias/adam_v ['electra', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/output/dense/kernel ['electra', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/output/dense/kernel/adam_v ['electra', 'encoder', 
'layer_3', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/key/bias ['electra', 'encoder', 'layer_3', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/key/bias/adam_m ['electra', 'encoder', 'layer_3', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/key/bias/adam_v ['electra', 'encoder', 'layer_3', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/key/kernel ['electra', 'encoder', 'layer_3', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/key/kernel/adam_m ['electra', 'encoder', 'layer_3', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/key/kernel/adam_v ['electra', 'encoder', 'layer_3', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/query/bias ['electra', 'encoder', 'layer_3', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/query/bias/adam_m ['electra', 'encoder', 'layer_3', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/query/bias/adam_v ['electra', 'encoder', 'layer_3', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/query/kernel ['electra', 'encoder', 'layer_3', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/query/kernel/adam_m ['electra', 'encoder', 'layer_3', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/query/kernel/adam_v ['electra', 'encoder', 'layer_3', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/value/bias ['electra', 'encoder', 'layer_3', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/value/bias/adam_m ['electra', 'encoder', 'layer_3', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/value/bias/adam_v ['electra', 'encoder', 'layer_3', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/value/kernel ['electra', 'encoder', 'layer_3', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/value/kernel/adam_m ['electra', 'encoder', 'layer_3', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/attention/self/value/kernel/adam_v ['electra', 'encoder', 'layer_3', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no 
attribute 'electra' Skipping electra/encoder/layer_3/intermediate/dense/bias ['electra', 'encoder', 'layer_3', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/intermediate/dense/bias/adam_m ['electra', 'encoder', 'layer_3', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/intermediate/dense/bias/adam_v ['electra', 'encoder', 'layer_3', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/intermediate/dense/kernel ['electra', 'encoder', 'layer_3', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/intermediate/dense/kernel/adam_m ['electra', 'encoder', 'layer_3', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/intermediate/dense/kernel/adam_v ['electra', 'encoder', 'layer_3', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/output/LayerNorm/beta ['electra', 'encoder', 'layer_3', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_3', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_3', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/output/LayerNorm/gamma ['electra', 'encoder', 'layer_3', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_3', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_3', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/output/dense/bias ['electra', 'encoder', 'layer_3', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/output/dense/bias/adam_m ['electra', 'encoder', 'layer_3', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/output/dense/bias/adam_v ['electra', 'encoder', 'layer_3', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/output/dense/kernel ['electra', 'encoder', 'layer_3', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_3', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_3/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_3', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_4/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_4/attention/output/LayerNorm/beta/adam_m ['electra', 
'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra' Skipping electra/encoder/layer_4/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
[... the same message repeats for every remaining discriminator variable in the checkpoint: each weight under electra/encoder/layer_4 through layer_9 (attention self query/key/value kernel and bias, attention output dense kernel and bias, attention output LayerNorm beta and gamma, intermediate dense kernel and bias, output dense kernel and bias, output LayerNorm beta and gamma), together with its adam_m and adam_v optimizer slots, is skipped with "'ElectraModel' object has no attribute 'electra'" ...]
Skipping generator/embeddings_project/bias ['generator', 'embeddings_project', 'bias'] 'ElectraModel' object has no attribute 'generator'
[... the analogous message repeats for every generator variable: each weight under generator/embeddings_project and generator/encoder/layer_0 and layer_1 (same per-layer parameters as above), together with its adam_m and adam_v optimizer slots, is skipped with "'ElectraModel' object has no attribute 'generator'" ...]
Skipping generator/encoder/layer_10/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_10',
'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/output/dense/bias ['generator', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/output/dense/kernel ['generator', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/key/bias ['generator', 'encoder', 'layer_10', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_10', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_10', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/key/kernel ['generator', 'encoder', 'layer_10', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_10', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_10', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/query/bias ['generator', 'encoder', 'layer_10', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 
'generator' Skipping generator/encoder/layer_10/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_10', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_10', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/query/kernel ['generator', 'encoder', 'layer_10', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_10', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_10', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/value/bias ['generator', 'encoder', 'layer_10', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_10', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_10', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/value/kernel ['generator', 'encoder', 'layer_10', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_10', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_10', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/intermediate/dense/bias ['generator', 'encoder', 'layer_10', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_10', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_10', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/intermediate/dense/kernel ['generator', 'encoder', 'layer_10', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_10', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_10', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/output/LayerNorm/beta ['generator', 'encoder', 'layer_10', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object 
has no attribute 'generator' Skipping generator/encoder/layer_10/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_10', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_10', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/output/LayerNorm/gamma ['generator', 'encoder', 'layer_10', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_10', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_10', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/output/dense/bias ['generator', 'encoder', 'layer_10', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/output/dense/bias/adam_m ['generator', 'encoder', 'layer_10', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/output/dense/bias/adam_v ['generator', 'encoder', 'layer_10', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/output/dense/kernel ['generator', 'encoder', 'layer_10', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_10', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_10/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_10', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/output/dense/bias ['generator', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' 
Skipping generator/encoder/layer_11/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/output/dense/kernel ['generator', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/key/bias ['generator', 'encoder', 'layer_11', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_11', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_11', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/key/kernel ['generator', 'encoder', 'layer_11', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_11', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_11', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/query/bias ['generator', 'encoder', 'layer_11', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_11', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_11', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/query/kernel ['generator', 'encoder', 'layer_11', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_11', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_11', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/value/bias ['generator', 'encoder', 'layer_11', 'attention', 'self', 
'value', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_11', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_11', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/value/kernel ['generator', 'encoder', 'layer_11', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_11', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_11', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/intermediate/dense/bias ['generator', 'encoder', 'layer_11', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_11', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_11', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/intermediate/dense/kernel ['generator', 'encoder', 'layer_11', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_11', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_11', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/output/LayerNorm/beta ['generator', 'encoder', 'layer_11', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_11', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_11', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/output/LayerNorm/gamma ['generator', 'encoder', 'layer_11', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_11', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_11', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/output/dense/bias ['generator', 'encoder', 'layer_11', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 
'generator' Skipping generator/encoder/layer_11/output/dense/bias/adam_m ['generator', 'encoder', 'layer_11', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/output/dense/bias/adam_v ['generator', 'encoder', 'layer_11', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/output/dense/kernel ['generator', 'encoder', 'layer_11', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_11', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_11/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_11', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/output/dense/bias ['generator', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/output/dense/kernel ['generator', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/key/bias ['generator', 'encoder', 'layer_2', 'attention', 'self', 'key', 'bias'] 
'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_2', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_2', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/key/kernel ['generator', 'encoder', 'layer_2', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_2', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_2', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/query/bias ['generator', 'encoder', 'layer_2', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_2', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_2', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/query/kernel ['generator', 'encoder', 'layer_2', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_2', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_2', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/value/bias ['generator', 'encoder', 'layer_2', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_2', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_2', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/value/kernel ['generator', 'encoder', 'layer_2', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_2', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_2', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/intermediate/dense/bias ['generator', 'encoder', 'layer_2', 'intermediate', 'dense', 
'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_2', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_2', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/intermediate/dense/kernel ['generator', 'encoder', 'layer_2', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_2', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_2', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/output/LayerNorm/beta ['generator', 'encoder', 'layer_2', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_2', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_2', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/output/LayerNorm/gamma ['generator', 'encoder', 'layer_2', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_2', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_2', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/output/dense/bias ['generator', 'encoder', 'layer_2', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/output/dense/bias/adam_m ['generator', 'encoder', 'layer_2', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/output/dense/bias/adam_v ['generator', 'encoder', 'layer_2', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/output/dense/kernel ['generator', 'encoder', 'layer_2', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_2', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_2/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_2', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_3', 
'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/output/dense/bias ['generator', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/output/dense/kernel ['generator', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/key/bias ['generator', 'encoder', 'layer_3', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_3', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_3', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/key/kernel ['generator', 'encoder', 'layer_3', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_3', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_3', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/query/bias ['generator', 'encoder', 'layer_3', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping 
generator/encoder/layer_3/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_3', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_3', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/query/kernel ['generator', 'encoder', 'layer_3', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_3', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_3', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/value/bias ['generator', 'encoder', 'layer_3', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_3', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_3', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/value/kernel ['generator', 'encoder', 'layer_3', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_3', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_3', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/intermediate/dense/bias ['generator', 'encoder', 'layer_3', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_3', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_3', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/intermediate/dense/kernel ['generator', 'encoder', 'layer_3', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_3', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_3', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/output/LayerNorm/beta ['generator', 'encoder', 'layer_3', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping 
generator/encoder/layer_3/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_3', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_3', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/output/LayerNorm/gamma ['generator', 'encoder', 'layer_3', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_3', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_3', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/output/dense/bias ['generator', 'encoder', 'layer_3', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/output/dense/bias/adam_m ['generator', 'encoder', 'layer_3', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/output/dense/bias/adam_v ['generator', 'encoder', 'layer_3', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/output/dense/kernel ['generator', 'encoder', 'layer_3', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_3', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_3/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_3', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/output/dense/bias ['generator', 'encoder', 'layer_4', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/output/dense/bias/adam_m 
['generator', 'encoder', 'layer_4', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_4', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/output/dense/kernel ['generator', 'encoder', 'layer_4', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_4', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_4', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/key/bias ['generator', 'encoder', 'layer_4', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_4', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_4', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/key/kernel ['generator', 'encoder', 'layer_4', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_4', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_4', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/query/bias ['generator', 'encoder', 'layer_4', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_4', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_4', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/query/kernel ['generator', 'encoder', 'layer_4', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_4', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_4', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/value/bias ['generator', 'encoder', 'layer_4', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping 
generator/encoder/layer_4/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_4', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_4', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/value/kernel ['generator', 'encoder', 'layer_4', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_4', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_4', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/intermediate/dense/bias ['generator', 'encoder', 'layer_4', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_4', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_4', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/intermediate/dense/kernel ['generator', 'encoder', 'layer_4', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_4', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_4', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/output/LayerNorm/beta ['generator', 'encoder', 'layer_4', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_4', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_4', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/output/LayerNorm/gamma ['generator', 'encoder', 'layer_4', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_4', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_4', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/output/dense/bias ['generator', 'encoder', 'layer_4', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/output/dense/bias/adam_m ['generator', 'encoder', 'layer_4', 'output', 
'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/output/dense/bias/adam_v ['generator', 'encoder', 'layer_4', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/output/dense/kernel ['generator', 'encoder', 'layer_4', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_4', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_4/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_4', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/output/dense/bias ['generator', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/output/dense/kernel ['generator', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/key/bias ['generator', 'encoder', 'layer_5', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/key/bias/adam_m ['generator', 
'encoder', 'layer_5', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_5', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/key/kernel ['generator', 'encoder', 'layer_5', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_5', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_5', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/query/bias ['generator', 'encoder', 'layer_5', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_5', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_5', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/query/kernel ['generator', 'encoder', 'layer_5', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_5', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_5', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/value/bias ['generator', 'encoder', 'layer_5', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_5', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_5', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/value/kernel ['generator', 'encoder', 'layer_5', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_5', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_5', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/intermediate/dense/bias ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/intermediate/dense/bias/adam_m 
['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/intermediate/dense/kernel ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/output/LayerNorm/beta ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/output/LayerNorm/gamma ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/output/dense/bias ['generator', 'encoder', 'layer_5', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/output/dense/bias/adam_m ['generator', 'encoder', 'layer_5', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/output/dense/bias/adam_v ['generator', 'encoder', 'layer_5', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/output/dense/kernel ['generator', 'encoder', 'layer_5', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_5', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_5/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_5', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping 
generator/encoder/layer_6/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/output/dense/bias ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/output/dense/kernel ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/key/bias ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/key/kernel ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/query/bias ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 
'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/query/kernel ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/value/bias ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/value/kernel ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/intermediate/dense/bias ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/intermediate/dense/kernel ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/output/LayerNorm/beta ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'beta', 'adam_m'] 
'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/output/LayerNorm/gamma ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/output/dense/bias ['generator', 'encoder', 'layer_6', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/output/dense/bias/adam_m ['generator', 'encoder', 'layer_6', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/output/dense/bias/adam_v ['generator', 'encoder', 'layer_6', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/output/dense/kernel ['generator', 'encoder', 'layer_6', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_6', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_6/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_6', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/output/dense/bias ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' 
Skipping generator/encoder/layer_7/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/output/dense/kernel ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/key/bias ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/key/kernel ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/query/bias ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/query/kernel ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/value/bias ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has 
no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/value/kernel ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/intermediate/dense/bias ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/intermediate/dense/kernel ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/output/LayerNorm/beta ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/output/LayerNorm/gamma ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/output/dense/bias ['generator', 'encoder', 'layer_7', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/output/dense/bias/adam_m ['generator', 'encoder', 'layer_7', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/output/dense/bias/adam_v ['generator', 'encoder', 
'layer_7', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/output/dense/kernel ['generator', 'encoder', 'layer_7', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_7', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_7/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_7', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/output/dense/bias ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/output/dense/kernel ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/key/bias ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping 
generator/encoder/layer_8/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/key/kernel ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/query/bias ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/query/kernel ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/value/bias ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/value/kernel ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/intermediate/dense/bias ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' 
Skipping generator/encoder/layer_8/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/intermediate/dense/kernel ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/output/LayerNorm/beta ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/output/LayerNorm/gamma ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/output/dense/bias ['generator', 'encoder', 'layer_8', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/output/dense/bias/adam_m ['generator', 'encoder', 'layer_8', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/output/dense/bias/adam_v ['generator', 'encoder', 'layer_8', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/output/dense/kernel ['generator', 'encoder', 'layer_8', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_8', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_8/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_8', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'beta', 
'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/output/dense/bias ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/output/dense/kernel ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/key/bias ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/key/kernel ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/query/bias ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/query/bias/adam_v 
['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/query/kernel ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/value/bias ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/value/kernel ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/intermediate/dense/bias ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/intermediate/dense/kernel ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/output/LayerNorm/beta ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/output/LayerNorm/beta/adam_v ['generator', 'encoder', 
'layer_9', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/output/LayerNorm/gamma ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/output/dense/bias ['generator', 'encoder', 'layer_9', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/output/dense/bias/adam_m ['generator', 'encoder', 'layer_9', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/output/dense/bias/adam_v ['generator', 'encoder', 'layer_9', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/output/dense/kernel ['generator', 'encoder', 'layer_9', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_9', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator' Skipping generator/encoder/layer_9/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_9', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator' Skipping generator_predictions/LayerNorm/beta ['generator_predictions', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator_predictions' Skipping generator_predictions/LayerNorm/beta/adam_m ['generator_predictions', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator_predictions' Skipping generator_predictions/LayerNorm/beta/adam_v ['generator_predictions', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator_predictions' Skipping generator_predictions/LayerNorm/gamma ['generator_predictions', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator_predictions' Skipping generator_predictions/LayerNorm/gamma/adam_m ['generator_predictions', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator_predictions' Skipping generator_predictions/LayerNorm/gamma/adam_v ['generator_predictions', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator_predictions' Skipping generator_predictions/dense/bias ['generator_predictions', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator_predictions' Skipping generator_predictions/dense/bias/adam_m ['generator_predictions', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator_predictions' Skipping generator_predictions/dense/bias/adam_v ['generator_predictions', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator_predictions' Skipping generator_predictions/dense/kernel ['generator_predictions', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator_predictions' Skipping generator_predictions/dense/kernel/adam_m ['generator_predictions', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 
'generator_predictions' Skipping generator_predictions/dense/kernel/adam_v ['generator_predictions', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator_predictions' Skipping generator_predictions/output_bias ['generator_lm_head', 'bias'] 'ElectraModel' object has no attribute 'generator_lm_head' Skipping generator_predictions/output_bias/adam_m ['generator_lm_head', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator_lm_head' Skipping generator_predictions/output_bias/adam_v ['generator_lm_head', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator_lm_head' ``` So, my question is: is this the expected behaviour of the Electra model loader, or am I doing something wrong? Thanks!
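The "Skipping ..." messages above are consistent with loading a combined ELECTRA pretraining checkpoint into `ElectraModel`, which only holds the discriminator stack: every `generator/*` and `generator_predictions/*` variable (plus the `adam_m`/`adam_v` optimizer slots) has no destination in the PyTorch model, so it is skipped. A minimal sketch of how the two halves are usually loaded with the Hugging Face API; the `google/electra-small-*` checkpoint names are the published ones, everything else here is illustrative rather than the issue author's actual setup:

```python
from transformers import ElectraModel, ElectraForMaskedLM, ElectraTokenizer

# Discriminator stack only: generator-side variables in a combined TF
# checkpoint have no counterpart here, hence the "Skipping ..." log lines.
discriminator = ElectraModel.from_pretrained("google/electra-small-discriminator")

# The generator half is a masked-LM model and ships as a separate checkpoint.
generator = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
```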
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3671/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3671/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3670
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3670/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3670/comments
https://api.github.com/repos/huggingface/transformers/issues/3670/events
https://github.com/huggingface/transformers/issues/3670
595,644,434
MDU6SXNzdWU1OTU2NDQ0MzQ=
3,670
Has anyone used run_language_modeling.py to train a GPT-2 model in a different language? Is it possible?
{ "login": "nikkon3", "id": 41228217, "node_id": "MDQ6VXNlcjQxMjI4MjE3", "avatar_url": "https://avatars.githubusercontent.com/u/41228217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nikkon3", "html_url": "https://github.com/nikkon3", "followers_url": "https://api.github.com/users/nikkon3/followers", "following_url": "https://api.github.com/users/nikkon3/following{/other_user}", "gists_url": "https://api.github.com/users/nikkon3/gists{/gist_id}", "starred_url": "https://api.github.com/users/nikkon3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nikkon3/subscriptions", "organizations_url": "https://api.github.com/users/nikkon3/orgs", "repos_url": "https://api.github.com/users/nikkon3/repos", "events_url": "https://api.github.com/users/nikkon3/events{/privacy}", "received_events_url": "https://api.github.com/users/nikkon3/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi! Maybe you can have a look at to the issue #1560 . ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,591
1,591
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3670/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3670/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3669
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3669/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3669/comments
https://api.github.com/repos/huggingface/transformers/issues/3669/events
https://github.com/huggingface/transformers/pull/3669
595,600,620
MDExOlB1bGxSZXF1ZXN0NDAwMDQ2MjQ5
3,669
[examples] Generate argparsers from type hints on dataclasses
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=h1) Report\n> Merging [#3669](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0a9d09b42a9c7c1ccc00da48486a1188078e8594&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `83.54%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3669/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3669 +/- ##\n==========================================\n+ Coverage 78.03% 78.07% +0.03% \n==========================================\n Files 104 106 +2 \n Lines 17708 17787 +79 \n==========================================\n+ Hits 13819 13887 +68 \n- Misses 3889 3900 +11 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/hf\\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/3669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `74.00% <74.00%> (ΓΈ)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.98% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/3669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `100.00% <100.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.23% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=footer). Last update [0a9d09b...b63747d](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Phew! Ok, I went through multiple rewrites today and I think it is pretty good now.\r\n\r\n**TL;DR:**\r\n\r\n- I only made changes to the `run_glue.py` example script to keep the diff smaller.\r\n- I have to pass `DataClassType`s (_types_, not instances) to `HfArgumentParser` because if they have required properties/arguments, we wouldn't be able to instantiate them before \"filling\" them\r\n- The class is designed to play well with the native `argparse`. In particular, you can get back any not-known args and parse them using a different argparse.ArgumentParser, to make adoption easier in complex scripts.\r\n- **read the unit tests for (a subset of) the supported arguments and how the properties translate into arguments.** " ]
1,586
1,586
1,586
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3669/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3669", "html_url": "https://github.com/huggingface/transformers/pull/3669", "diff_url": "https://github.com/huggingface/transformers/pull/3669.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3669.patch", "merged_at": 1586535718000 }
https://api.github.com/repos/huggingface/transformers/issues/3668
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3668/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3668/comments
https://api.github.com/repos/huggingface/transformers/issues/3668/events
https://github.com/huggingface/transformers/issues/3668
595,588,166
MDU6SXNzdWU1OTU1ODgxNjY=
3,668
❓ In BART, why forcing the first token to BOS ?
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Colanim,\r\n\r\nThat's indeed a very good question! The only reason why we add these hacks here is because that's the way Fairseq implemented it and you get better results on summarization using Bart this way. We measured the differences in performance when leaving out those \"force token\" hacks and it was quite significant. \r\nPlease read through these PRs to better understand why we made this decision: \r\n\r\nhttps://github.com/huggingface/transformers/pull/3225\r\n\r\nand \r\n\r\nhttps://github.com/huggingface/transformers/pull/3140", "> I don't understand why it's necessary, because anyway the decoder input ids already contain BOS :\r\n\r\nRegarding this point, is the situation different? I went through the PRs and the code, but it seems that the default `decoder_start_token_id` is still the EOS_token.\r\n\r\nUltimately, the question I want to ask is \r\n**If I want to use BART for fine-tuning on another summarization task, do I set the `decoder_start_token_id` to EOS_token or BOS_token?**" ]
1,586
1,635
1,586
CONTRIBUTOR
null
# ❓ Questions & Help In the generation method, the method `prepare_scores_for_generation` is called : https://github.com/huggingface/transformers/blob/0a9d09b42a9c7c1ccc00da48486a1188078e8594/src/transformers/modeling_utils.py#L1208 And in this method, if it's the first decoding step, BOS token is forced : https://github.com/huggingface/transformers/blob/0a9d09b42a9c7c1ccc00da48486a1188078e8594/src/transformers/modeling_bart.py#L924-L926 --- I don't understand why it's necessary, because anyway the decoder input ids already contain BOS : https://github.com/huggingface/transformers/blob/0a9d09b42a9c7c1ccc00da48486a1188078e8594/src/transformers/modeling_utils.py#L866-L868
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3668/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3667
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3667/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3667/comments
https://api.github.com/repos/huggingface/transformers/issues/3667/events
https://github.com/huggingface/transformers/issues/3667
595,586,621
MDU6SXNzdWU1OTU1ODY2MjE=
3,667
Any Ideas on how to generate in bulk with CTRL?
{ "login": "AdaUchendu", "id": 32556160, "node_id": "MDQ6VXNlcjMyNTU2MTYw", "avatar_url": "https://avatars.githubusercontent.com/u/32556160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdaUchendu", "html_url": "https://github.com/AdaUchendu", "followers_url": "https://api.github.com/users/AdaUchendu/followers", "following_url": "https://api.github.com/users/AdaUchendu/following{/other_user}", "gists_url": "https://api.github.com/users/AdaUchendu/gists{/gist_id}", "starred_url": "https://api.github.com/users/AdaUchendu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AdaUchendu/subscriptions", "organizations_url": "https://api.github.com/users/AdaUchendu/orgs", "repos_url": "https://api.github.com/users/AdaUchendu/repos", "events_url": "https://api.github.com/users/AdaUchendu/events{/privacy}", "received_events_url": "https://api.github.com/users/AdaUchendu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "CTRL is a very large model so generating in bulk would require a lot of RAM. \r\nAlso generating with padded batches is not really supported yet, see: https://github.com/huggingface/transformers/issues/3021", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,591
1,591
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> Is it possible to generate multiple articles with a list of prompts using CTRL? Any ideas will be greatly appreciated. <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. -->
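Editorial note: a minimal sketch (not from the issue itself) of the one-prompt-at-a-time workaround implied by the maintainer's comment above, since padded batch generation was not yet supported at the time. The prompts, control codes, and generation settings are illustrative assumptions.

```python
import torch
from transformers import CTRLLMHeadModel, CTRLTokenizer

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")  # ~1.6B params: needs a lot of RAM
model.eval()

# Illustrative prompts; CTRL expects a control code ("Links", "Wikipedia", ...)
# at the start of each prompt.
prompts = ["Links Scientists discover", "Wikipedia The history of"]

articles = []
with torch.no_grad():
    for prompt in prompts:  # "bulk" generation as a plain loop, so no padding is needed
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        output = model.generate(
            input_ids,
            max_length=256,
            repetition_penalty=1.2,  # commonly recommended for CTRL
        )
        articles.append(tokenizer.decode(output[0], skip_special_tokens=True))
```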
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3667/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3666
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3666/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3666/comments
https://api.github.com/repos/huggingface/transformers/issues/3666/events
https://github.com/huggingface/transformers/pull/3666
595,580,548
MDExOlB1bGxSZXF1ZXN0NDAwMDI5NDYy
3,666
Created README.md for model card ChemBERTa
{ "login": "seyonechithrananda", "id": 46096704, "node_id": "MDQ6VXNlcjQ2MDk2NzA0", "avatar_url": "https://avatars.githubusercontent.com/u/46096704?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seyonechithrananda", "html_url": "https://github.com/seyonechithrananda", "followers_url": "https://api.github.com/users/seyonechithrananda/followers", "following_url": "https://api.github.com/users/seyonechithrananda/following{/other_user}", "gists_url": "https://api.github.com/users/seyonechithrananda/gists{/gist_id}", "starred_url": "https://api.github.com/users/seyonechithrananda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seyonechithrananda/subscriptions", "organizations_url": "https://api.github.com/users/seyonechithrananda/orgs", "repos_url": "https://api.github.com/users/seyonechithrananda/repos", "events_url": "https://api.github.com/users/seyonechithrananda/events{/privacy}", "received_events_url": "https://api.github.com/users/seyonechithrananda/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=h1) Report\n> Merging [#3666](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0a9d09b42a9c7c1ccc00da48486a1188078e8594&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3666/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3666 +/- ##\n==========================================\n+ Coverage 78.03% 78.04% +0.01% \n==========================================\n Files 104 104 \n Lines 17708 17708 \n==========================================\n+ Hits 13819 13821 +2 \n+ Misses 3889 3887 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.23% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=footer). Last update [0a9d09b...7df3a3b](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This is really cool, @seyonechithrananda. Can you add a \r\n\r\n```\r\n---\r\ntags: \r\n- chemistry\r\n---\r\n```\r\n\r\nmetadata block to the top of the file? Also cc'ing @mrm8488 who might be interested", "Thank you @julien-c. I uploaded two models from ChEMBL25/26 for drug structure learning (SMILES) using same technique. In fact, they have been used for COVID-19 drug discovery", "@mrm8488 Are you targeting ligand-protein modelling techniques with transformers? ", "@julien-c Made the changes. Let me know what you think!", "> @mrm8488 Are you targeting ligand-protein modelling techniques with transformers?\r\n\r\nAs told you via Twitter I am getting started into it. Getting chemical knowledge :)" ]
1,586
1,586
1,586
CONTRIBUTOR
null
The README.md is added to give an overview of SMILES, as this is the only model card for a model trained on a non-language dataset. The documentation also explains potential use-cases for a RoBERTa model trained with masked language modelling on SMILES, and links to a repository with the original notebooks for evaluation, running predictions, and some exploratory applications of the models.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3666/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3666", "html_url": "https://github.com/huggingface/transformers/pull/3666", "diff_url": "https://github.com/huggingface/transformers/pull/3666.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3666.patch", "merged_at": 1586351421000 }
https://api.github.com/repos/huggingface/transformers/issues/3665
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3665/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3665/comments
https://api.github.com/repos/huggingface/transformers/issues/3665/events
https://github.com/huggingface/transformers/pull/3665
595,519,746
MDExOlB1bGxSZXF1ZXN0Mzk5OTgwODYw
3,665
Fix mlm
{ "login": "Santosh-Gupta", "id": 5524261, "node_id": "MDQ6VXNlcjU1MjQyNjE=", "avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Santosh-Gupta", "html_url": "https://github.com/Santosh-Gupta", "followers_url": "https://api.github.com/users/Santosh-Gupta/followers", "following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}", "gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}", "starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions", "organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs", "repos_url": "https://api.github.com/users/Santosh-Gupta/repos", "events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}", "received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Couldn't we achieve the same result by specifying `add_special_tokens=False` instead of that? This isn't robust to different models, as GPT-2 (which doesn't have special tokens) would get some tokens removed.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Closing this as `run_language_modeling.py` is now based on the trainer. Thanks for your contribution!!" ]
1,586
1,594
1,594
CONTRIBUTOR
null
The way the text is being split up into blocks right now ``` tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text)) for i in range(0, len(tokenized_text) - block_size + 1, block_size): # Truncate in block of block_size self.examples.append(tokenizer.build_inputs_with_special_tokens(tokenized_text[i : i + block_size])) ``` results in double [CLS] tokens at the beginning, since special tokens are being added both at `tokenizer.convert_tokens_to_ids` and at `tokenizer.build_inputs_with_special_tokens`. Somehow, double [SEP] tokens are not occurring. The following eliminates the double [CLS] token. ``` tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text)[1:-2]) for i in range(0, len(tokenized_text) - block_size + 1, block_size): # Truncate in block of block_size self.examples.append(tokenizer.build_inputs_with_special_tokens(tokenized_text[i : i + block_size])) ```
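Editorial note: a sketch (not part of the PR) of the model-agnostic alternative suggested in review below — skip special tokens at encoding time and let `build_inputs_with_special_tokens` add them once per block. Variable names mirror the snippet above, and the surrounding `TextDataset` context (`tokenizer`, `text`, `block_size`, `self.examples`) is assumed.

```python
# Encode without [CLS]/[SEP]; they are added once per block below.
tokenized_text = tokenizer.encode(text, add_special_tokens=False)

for i in range(0, len(tokenized_text) - block_size + 1, block_size):  # truncate in blocks
    self.examples.append(
        tokenizer.build_inputs_with_special_tokens(tokenized_text[i : i + block_size])
    )
```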
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3665/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3665", "html_url": "https://github.com/huggingface/transformers/pull/3665", "diff_url": "https://github.com/huggingface/transformers/pull/3665.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3665.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/3664
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3664/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3664/comments
https://api.github.com/repos/huggingface/transformers/issues/3664/events
https://github.com/huggingface/transformers/issues/3664
595,506,443
MDU6SXNzdWU1OTU1MDY0NDM=
3,664
Unable to serialize/save TF2.0 RobertaSequenceClassification model to saved model format
{ "login": "agupta74", "id": 21690396, "node_id": "MDQ6VXNlcjIxNjkwMzk2", "avatar_url": "https://avatars.githubusercontent.com/u/21690396?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agupta74", "html_url": "https://github.com/agupta74", "followers_url": "https://api.github.com/users/agupta74/followers", "following_url": "https://api.github.com/users/agupta74/following{/other_user}", "gists_url": "https://api.github.com/users/agupta74/gists{/gist_id}", "starred_url": "https://api.github.com/users/agupta74/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agupta74/subscriptions", "organizations_url": "https://api.github.com/users/agupta74/orgs", "repos_url": "https://api.github.com/users/agupta74/repos", "events_url": "https://api.github.com/users/agupta74/events{/privacy}", "received_events_url": "https://api.github.com/users/agupta74/received_events", "type": "User", "site_admin": false }
[ { "id": 1834052129, "node_id": "MDU6TGFiZWwxODM0MDUyMTI5", "url": "https://api.github.com/repos/huggingface/transformers/labels/High-Level%20feature", "name": "High-Level feature", "color": "f7c9a3", "default": false, "description": "" }, { "id": 1834054694, "node_id": "MDU6TGFiZWwxODM0MDU0Njk0", "url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow", "name": "TensorFlow", "color": "FF6F00", "default": false, "description": "Anything TensorFlow" }, { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." }, { "id": 1862634478, "node_id": "MDU6TGFiZWwxODYyNjM0NDc4", "url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix", "name": "Should Fix", "color": "FF0000", "default": false, "description": "This has been identified as a bug and should be fixed." } ]
closed
false
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[ { "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false } ]
[ "do you solve it ?", "> do you solve it ?\r\n\r\nNot yet. Let me know if you are able to find the fix.", "yes, I have the same issue. I inspected the tensorboard graph and there is no config operation or anything of that sort. I also tried to save it with a manually defined signature, which didn't work either.\r\n\r\nWorkaround for now is to use the `save_pretrained`. Is there a way to convert pretrained to TF2.0 saved_model?", "We also see a similar issue in transformers 2.9.1.\r\n\r\nAlso curious if people have a workaround or solution to use with TF model serving?", "FYI: The cause for this issue is documented in #4709. The current workaround/fix is to remove `config` from the call to this function:\r\n\r\nhttps://github.com/huggingface/transformers/blob/d6a677b14bcfd56b22fafeb212a27c6068886e07/src/transformers/modeling_tf_roberta.py#L331 \r\n\r\nThis prevents `trainable` from being set to `config` in the initialization function of `tf.keras.layers.Layer`. Then the model will be serialized correctly instead of failing to serialize the `trainable` value later.", "Hello!\r\n\r\nThe saving in saved model format is not implemented yet, but it is planned to work on it :) I will reply here once there will be something about this. Sorry for the inconvenience.", "I think this issue can be closed now due to PR #4884.\r\n\r\nThe sample code in the issue runs successfully in `master`.\r\n\r\n<img width=\"1010\" alt=\"Screenshot 2020-06-10 at 12 09 12\" src=\"https://user-images.githubusercontent.com/5602332/84255742-5363d000-ab13-11ea-822c-72da88399995.png\">\r\n", "Great, thanks for solving the issue @harkous ", "@harkous Thanks for the work.\r\nI have still the issue of the author even with your code and upgrade the last version of transformer with pip install transformers --upgrade , is it still working with you ?", "You have to install transformers from the master branch. The fix has not been released yet.", "Hello ! 
\r\n\r\nIt seems that I have a similar issue with a model based on Camembert when trying to save my model with : \r\n\r\n`model.save(\"model\",save_format='tf')`\r\n\r\nGive me : \r\n\r\n```\r\nTypeError: ('Not JSON Serializable:', CamembertConfig {\r\n \"architectures\": [\r\n \"CamembertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"bos_token_id\": 5,\r\n \"eos_token_id\": 6,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-05,\r\n \"max_position_embeddings\": 514,\r\n \"model_type\": \"camembert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"output_past\": true,\r\n \"pad_token_id\": 1,\r\n \"type_vocab_size\": 1,\r\n \"vocab_size\": 32005\r\n}\r\n)\r\n```\r\n\r\nAt first with transformers 2.11.0 but also after upgrading to 3.3 (with TF 2.3)\r\n\r\nI can give a code snippet to reproduce if necessary and Custom Model construction can be found here : https://github.com/MAIF/melusine/blob/master/melusine/models/neural_architectures.py#L312\r\n\r\n\r\n**Complete Stack Trace**\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-32-ae508742561b> in <module>\r\n----> 1 model.model.save(\"test4\",save_format='tf')\r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options)\r\n 1977 \"\"\"\r\n 1978 save.save_model(self, filepath, overwrite, include_optimizer, save_format,\r\n-> 1979 signatures, options)\r\n 1980 \r\n 1981 def save_weights(self,\r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options)\r\n 132 else:\r\n 133 saved_model_save.save(model, filepath, overwrite, include_optimizer,\r\n--> 134 signatures, options)\r\n 135 \r\n 136 \r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options)\r\n 78 # we use the default replica context here.\r\n 79 with distribution_strategy_context._get_default_replica_context(): # pylint: disable=protected-access\r\n---> 80 save_lib.save(model, filepath, signatures, options)\r\n 81 \r\n 82 if not include_optimizer:\r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in save(obj, export_dir, signatures, options)\r\n 974 \r\n 975 _, exported_graph, object_saver, asset_info = _build_meta_graph(\r\n--> 976 obj, export_dir, signatures, options, meta_graph_def)\r\n 977 saved_model.saved_model_schema_version = constants.SAVED_MODEL_SCHEMA_VERSION\r\n 978 \r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, export_dir, signatures, options, meta_graph_def)\r\n 1074 \r\n 1075 object_graph_proto = _serialize_object_graph(saveable_view,\r\n-> 1076 asset_info.asset_index)\r\n 1077 meta_graph_def.object_graph_def.CopyFrom(object_graph_proto)\r\n 1078 \r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in _serialize_object_graph(saveable_view, asset_file_def_index)\r\n 719 for 
obj, obj_proto in zip(saveable_view.nodes, proto.nodes):\r\n 720 _write_object_proto(obj, obj_proto, asset_file_def_index,\r\n--> 721 saveable_view.function_name_map)\r\n 722 return proto\r\n 723 \r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in _write_object_proto(obj, proto, asset_file_def_index, function_name_map)\r\n 759 version=versions_pb2.VersionDef(\r\n 760 producer=1, min_consumer=1, bad_consumers=[]),\r\n--> 761 metadata=obj._tracking_metadata)\r\n 762 # pylint:enable=protected-access\r\n 763 proto.user_object.CopyFrom(registered_type_proto)\r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py in _tracking_metadata(self)\r\n 3009 @property\r\n 3010 def _tracking_metadata(self):\r\n-> 3011 return self._trackable_saved_model_saver.tracking_metadata\r\n 3012 \r\n 3013 def _list_extra_dependencies_for_serialization(self, serialization_cache):\r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py in tracking_metadata(self)\r\n 52 # TODO(kathywu): check that serialized JSON can be loaded (e.g., if an\r\n 53 # object is in the python property)\r\n---> 54 return json_utils.Encoder().encode(self.python_properties)\r\n 55 \r\n 56 def list_extra_dependencies_for_serialization(self, serialization_cache):\r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/json_utils.py in encode(self, obj)\r\n 42 \r\n 43 def encode(self, obj):\r\n---> 44 return super(Encoder, self).encode(_encode_tuple(obj))\r\n 45 \r\n 46 \r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/json/encoder.py in encode(self, o)\r\n 197 # exceptions aren't as detailed. 
The list call should be roughly\r\n 198 # equivalent to the PySequence_Fast that ''.join() would do.\r\n--> 199 chunks = self.iterencode(o, _one_shot=True)\r\n 200 if not isinstance(chunks, (list, tuple)):\r\n 201 chunks = list(chunks)\r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/json/encoder.py in iterencode(self, o, _one_shot)\r\n 255 self.key_separator, self.item_separator, self.sort_keys,\r\n 256 self.skipkeys, _one_shot)\r\n--> 257 return _iterencode(o, 0)\r\n 258 \r\n 259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,\r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/json_utils.py in default(self, obj)\r\n 39 items = obj.as_list() if obj.rank is not None else None\r\n 40 return {'class_name': 'TensorShape', 'items': items}\r\n---> 41 return serialization.get_json_type(obj)\r\n 42 \r\n 43 def encode(self, obj):\r\n\r\n~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/util/serialization.py in get_json_type(obj)\r\n 70 return obj.__wrapped__\r\n 71 \r\n---> 72 raise TypeError('Not JSON Serializable:', obj)\r\n\r\nTypeError: ('Not JSON Serializable:', CamembertConfig {\r\n \"architectures\": [\r\n \"CamembertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"bos_token_id\": 5,\r\n \"eos_token_id\": 6,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-05,\r\n \"max_position_embeddings\": 514,\r\n \"model_type\": \"camembert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"output_past\": true,\r\n \"pad_token_id\": 1,\r\n \"type_vocab_size\": 1,\r\n \"vocab_size\": 32005\r\n}\r\n)\r\n```", "Please open another issue with a code snippet to make us able to reproduce your problem." ]
1,586
1,602
1,591
NONE
null
# πŸ› Bug I am getting an error while trying to serialize/save TF2.0 RobertaSequenceClassification Keras model to saved model format. I do not see this issue with Bert or Albert model architecture. Please see below for my test script that can be used to reproduce this issue. ## Information Model I am using (Bert, XLNet ...): Roberta Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ```python import tensorflow as tf from transformers import * tokenizer = RobertaTokenizer.from_pretrained('roberta-base') model = TFRobertaForSequenceClassification.from_pretrained('roberta-base') ##########Uncomment the following 2 lines for testing with BERT ############ #tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') #model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased') input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :] outputs = model(input_ids) logits = outputs[0] tf_saved_model_path= "/tmp/saved_model/" tf.keras.models.save_model(model, tf_saved_model_path, overwrite=True, include_optimizer=False, save_format='tf') ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) I need to export/serialize a TF Keras model to TF saved model format ## To reproduce Steps to reproduce the behavior: 1. Run the script pasted above to reproduce the issue with Roberta 2. Uncomment the 2 lines as mentioned in the script for using Bert (no error seen with Bert) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ***Stack Trace for Roberta*** --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-5-87e63ee0b3ac> in <module> 15 16 tf_saved_model_path= "/tmp/saved_model/" ---> 17 tf.keras.models.save_model(model, tf_saved_model_path, overwrite=True, include_optimizer=False, save_format='tf') ~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options) 136 else: 137 saved_model_save.save(model, filepath, overwrite, include_optimizer, --> 138 signatures, options) 139 140 ~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options) 76 # we use the default replica context here. 
77 with distribution_strategy_context._get_default_replica_context(): # pylint: disable=protected-access ---> 78 save_lib.save(model, filepath, signatures, options) 79 80 if not include_optimizer: ~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in save(obj, export_dir, signatures, options) 949 950 _, exported_graph, object_saver, asset_info = _build_meta_graph( --> 951 obj, export_dir, signatures, options, meta_graph_def) 952 saved_model.saved_model_schema_version = constants.SAVED_MODEL_SCHEMA_VERSION 953 ~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, export_dir, signatures, options, meta_graph_def) 1035 1036 object_graph_proto = _serialize_object_graph(saveable_view, -> 1037 asset_info.asset_index) 1038 meta_graph_def.object_graph_def.CopyFrom(object_graph_proto) 1039 ~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in _serialize_object_graph(saveable_view, asset_file_def_index) 695 for obj, obj_proto in zip(saveable_view.nodes, proto.nodes): 696 _write_object_proto(obj, obj_proto, asset_file_def_index, --> 697 saveable_view.function_name_map) 698 return proto 699 ~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in _write_object_proto(obj, proto, asset_file_def_index, function_name_map) 735 version=versions_pb2.VersionDef( 736 producer=1, min_consumer=1, bad_consumers=[]), --> 737 metadata=obj._tracking_metadata) 738 # pylint:enable=protected-access 739 proto.user_object.CopyFrom(registered_type_proto) ~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py in _tracking_metadata(self) 2727 @property 2728 def _tracking_metadata(self): -> 2729 return self._trackable_saved_model_saver.tracking_metadata 2730 2731 def _list_extra_dependencies_for_serialization(self, serialization_cache): ~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py in tracking_metadata(self) 52 # TODO(kathywu): check that serialized JSON can be loaded (e.g., if an 53 # object is in the python property) ---> 54 return json_utils.Encoder().encode(self.python_properties) 55 56 def list_extra_dependencies_for_serialization(self, serialization_cache): ~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/json_utils.py in encode(self, obj) 42 43 def encode(self, obj): ---> 44 return super(Encoder, self).encode(_encode_tuple(obj)) 45 46 /usr/local/opt/pyenv/versions/3.6.7/lib/python3.6/json/encoder.py in encode(self, o) 197 # exceptions aren't as detailed. The list call should be roughly 198 # equivalent to the PySequence_Fast that ''.join() would do. 
--> 199 chunks = self.iterencode(o, _one_shot=True) 200 if not isinstance(chunks, (list, tuple)): 201 chunks = list(chunks) /usr/local/opt/pyenv/versions/3.6.7/lib/python3.6/json/encoder.py in iterencode(self, o, _one_shot) 255 self.key_separator, self.item_separator, self.sort_keys, 256 self.skipkeys, _one_shot) --> 257 return _iterencode(o, 0) 258 259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, ~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/json_utils.py in default(self, obj) 39 items = obj.as_list() if obj.rank is not None else None 40 return {'class_name': 'TensorShape', 'items': items} ---> 41 return serialization.get_json_type(obj) 42 43 def encode(self, obj): ~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/util/serialization.py in get_json_type(obj) 74 return obj.__wrapped__ 75 ---> 76 raise TypeError('Not JSON Serializable:', obj) TypeError: ('Not JSON Serializable:', RobertaConfig { "_num_labels": 2, "architectures": [ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bad_words_ids": null, "bos_token_id": 0, "decoder_start_token_id": null, "do_sample": false, "early_stopping": false, "eos_token_id": 2, "finetuning_task": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": false, "is_encoder_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "layer_norm_eps": 1e-05, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 514, "min_length": 0, "model_type": "roberta", "no_repeat_ngram_size": 0, "num_attention_heads": 12, "num_beams": 1, "num_hidden_layers": 12, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pad_token_id": 1, "prefix": null, "pruned_heads": {}, "repetition_penalty": 1.0, "task_specific_params": null, "temperature": 1.0, "top_k": 50, "top_p": 1.0, "torchscript": false, "type_vocab_size": 1, "use_bfloat16": false, "vocab_size": 50265 } ) ## Expected behavior There should be no error when saving/serializing the TF Keras Model for Roberta. I do not see any error with Bert or Albert. <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.7.0 - Platform: Darwin-19.2.0-x86_64-i386-64bit - Python version: 3.6.7 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): 2.2.0-rc1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No I also see the same issue with TF 2.1.0.
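Editorial note: a sketch (not part of the issue) of the interim workaround mentioned in the comments above — bypass the Keras SavedModel export and use the library's own serialization until the fix (PR #4884) landed. The output path is a placeholder.

```python
from transformers import TFRobertaForSequenceClassification

model = TFRobertaForSequenceClassification.from_pretrained("roberta-base")

# Library-native serialization: writes tf_model.h5 + config.json, and avoids
# the JSON-encoding of the config object that breaks tf.keras.models.save_model.
model.save_pretrained("/tmp/roberta_seq_cls")
reloaded = TFRobertaForSequenceClassification.from_pretrained("/tmp/roberta_seq_cls")
```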
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3664/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3664/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3663
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3663/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3663/comments
https://api.github.com/repos/huggingface/transformers/issues/3663/events
https://github.com/huggingface/transformers/pull/3663
595,442,554
MDExOlB1bGxSZXF1ZXN0Mzk5OTE3ODQ0
3,663
Speedup torch summarization tests
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Non slow test speed before change:\r\n![image](https://user-images.githubusercontent.com/6045025/78608104-459d8000-782e-11ea-9a30-a16ea20e3eec.png)\r\n", "There is a tiny TF T5 model now as well via: \r\n`model = TFAutoModelWithLMHead.from_pretrained(\"patrickvonplaten/t5-tiny-random\")`", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=h1) Report\n> Merging [#3663](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0a9d09b42a9c7c1ccc00da48486a1188078e8594&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3663/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3663 +/- ##\n==========================================\n- Coverage 78.03% 78.02% -0.02% \n==========================================\n Files 104 104 \n Lines 17708 17708 \n==========================================\n- Hits 13819 13817 -2 \n- Misses 3889 3891 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3663/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3663/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.97% <0.00%> (-0.13%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=footer). Last update [0a9d09b...4751188](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "LGTM, but FYI for context, I think the reason we were using the real models was that we intended to do integration testing: in `_test_mono_column_pipeline` we only test equality of keys but we could have tested equality (or closeness) of values.", "Makes sense @julien-c . I'd be happy to add some `@slow` integration tests and try to fulfill the original intent" ]
1,586
1,586
1,586
CONTRIBUTOR
null
Speed up the torch summarization tests by using small models that are faster to download and instantiate.
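Editorial note: a sketch (not part of the PR) of the pattern the comments point at — swapping full checkpoints for tiny randomly-initialized ones in tests. The checkpoint name comes from the TF T5 comment above and is assumed to still be available on the model hub.

```python
from transformers import TFAutoModelWithLMHead

# Tiny random T5: downloads and instantiates in seconds, which is all a
# pipeline/shape test needs (output quality is irrelevant for such tests).
model = TFAutoModelWithLMHead.from_pretrained("patrickvonplaten/t5-tiny-random")
```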
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3663/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3663/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3663", "html_url": "https://github.com/huggingface/transformers/pull/3663", "diff_url": "https://github.com/huggingface/transformers/pull/3663.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3663.patch", "merged_at": 1586282491000 }
https://api.github.com/repos/huggingface/transformers/issues/3662
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3662/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3662/comments
https://api.github.com/repos/huggingface/transformers/issues/3662/events
https://github.com/huggingface/transformers/pull/3662
595,364,511
MDExOlB1bGxSZXF1ZXN0Mzk5ODUzMzIy
3,662
Create model card for NLP4H/ms_bert
{ "login": "MichalMalyska", "id": 12971408, "node_id": "MDQ6VXNlcjEyOTcxNDA4", "avatar_url": "https://avatars.githubusercontent.com/u/12971408?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MichalMalyska", "html_url": "https://github.com/MichalMalyska", "followers_url": "https://api.github.com/users/MichalMalyska/followers", "following_url": "https://api.github.com/users/MichalMalyska/following{/other_user}", "gists_url": "https://api.github.com/users/MichalMalyska/gists{/gist_id}", "starred_url": "https://api.github.com/users/MichalMalyska/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichalMalyska/subscriptions", "organizations_url": "https://api.github.com/users/MichalMalyska/orgs", "repos_url": "https://api.github.com/users/MichalMalyska/repos", "events_url": "https://api.github.com/users/MichalMalyska/events{/privacy}", "received_events_url": "https://api.github.com/users/MichalMalyska/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=h1) Report\n> Merging [#3662](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/261c4ff4e297e919ba993e1214a805e988bc9e79&el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3662/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3662 +/- ##\n==========================================\n- Coverage 78.29% 78.25% -0.05% \n==========================================\n Files 104 104 \n Lines 17628 17628 \n==========================================\n- Hits 13802 13794 -8 \n- Misses 3826 3834 +8 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3662/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.65% <0.00%> (-0.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3662/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.12% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3662/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.10% <0.00%> (-0.13%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=footer). Last update [261c4ff...fccd0c7](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@MichalMalyska This is super interesting, thanks for sharing\r\n\r\n[**Model page**](https://huggingface.co/NLP4H/ms_bert)" ]
1,586
1,586
1,586
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3662/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3662", "html_url": "https://github.com/huggingface/transformers/pull/3662", "diff_url": "https://github.com/huggingface/transformers/pull/3662.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3662.patch", "merged_at": 1586204871000 }
https://api.github.com/repos/huggingface/transformers/issues/3661
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3661/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3661/comments
https://api.github.com/repos/huggingface/transformers/issues/3661/events
https://github.com/huggingface/transformers/pull/3661
595,314,219
MDExOlB1bGxSZXF1ZXN0Mzk5ODEwOTY4
3,661
fixed TransfoXLLMHeadModel documentation
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,586
1,586
1,586
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3661/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3661", "html_url": "https://github.com/huggingface/transformers/pull/3661", "diff_url": "https://github.com/huggingface/transformers/pull/3661.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3661.patch", "merged_at": 1586213272000 }
https://api.github.com/repos/huggingface/transformers/issues/3660
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3660/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3660/comments
https://api.github.com/repos/huggingface/transformers/issues/3660/events
https://github.com/huggingface/transformers/issues/3660
595,267,614
MDU6SXNzdWU1OTUyNjc2MTQ=
3,660
Exception: process 0 terminated with signal SIGKILL
{ "login": "mobassir94", "id": 24439592, "node_id": "MDQ6VXNlcjI0NDM5NTky", "avatar_url": "https://avatars.githubusercontent.com/u/24439592?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mobassir94", "html_url": "https://github.com/mobassir94", "followers_url": "https://api.github.com/users/mobassir94/followers", "following_url": "https://api.github.com/users/mobassir94/following{/other_user}", "gists_url": "https://api.github.com/users/mobassir94/gists{/gist_id}", "starred_url": "https://api.github.com/users/mobassir94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mobassir94/subscriptions", "organizations_url": "https://api.github.com/users/mobassir94/orgs", "repos_url": "https://api.github.com/users/mobassir94/repos", "events_url": "https://api.github.com/users/mobassir94/events{/privacy}", "received_events_url": "https://api.github.com/users/mobassir94/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Did you find the solution to the problem?", "@jhashekhar it seems like pytorch xla has some memory issue itself, xla team is working on it, tf tpu is much better at this moment so i am not using pytorch tpu anymore,probably later this year torch xla team will solve all the performance issues they are having at this moment,until then i recommend tf tpu or if you need gpu then pytorch gpu.just my recommendation", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Had this come up when parallel training on GPUS with multiprocessing - thoughts on a solution?", "Take a look at this issue: https://github.com/pytorch/xla/issues/1870#issuecomment-612217012\r\n\r\nIt's pretty long, but they helped me solve this problem last year. I got the model working, but ended up using TF.", "> pytorch/xla#1870 (comment)\r\n\r\ndo you have a summary of what we need to do for solving this for pytorch?", "@brando90 please use bf16 and follow this simple tutorial of mine for better understanding : https://www.kaggle.com/mobassir/faster-pytorch-tpu-baseline-for-cld-cv-0-9\r\n\r\nDon't forget to reduce batch size,image size etc that fits in xla\r\n\r\nI think it will help you to solve your oom error,thanks", "> Had this come up when parallel training on GPUS with multiprocessing - thoughts on a solution?\r\n\r\nI got this problem. Did you solve it?" ]
1,586
1,629
1,596
NONE
null
# ❓ Questions & Help I was using this notebook : https://www.kaggle.com/theoviel/bert-pytorch-huggingface-with-tpu-multiprocessing to fine-tune Hugging Face's XLM-RoBERTa base model on Jigsaw Multilingual (an ongoing Kaggle competition). This is my first time with torch_xla and TPU multiprocessing! The code I am trying is exactly this one : https://pastebin.com/fS94MKYc on a Kaggle kernel, which provides a TPU v3-8, but even with batch_size = 8 my Jupyter notebook crashes after giving this error message : **Your notebook tried to allocate more memory than is available. It has restarted.** Meanwhile, I can see other people using the same model with batch_size = 64. The full error message looks like this : ``` --------------------------------------------------------------------------- Exception Traceback (most recent call last) <timed exec> in <module> /opt/conda/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py in spawn(fn, args, nprocs, join, daemon, start_method) 180 join=join, 181 daemon=daemon, --> 182 start_method=start_method) /opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method) 156 157 # Loop on join until it returns True or raises an exception. --> 158 while not context.join(): 159 pass 160 /opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py in join(self, timeout) 106 raise Exception( 107 "process %d terminated with signal %s" % --> 108 (error_index, name) 109 ) 110 else: Exception: process 0 terminated with signal SIGKILL ``` The same problem also occurs when I try Hugging Face's multilingual BERT base, so I don't understand exactly where in my code I need to make a change so that it works. It seems like the problem is not the batch size but something else that I am unable to catch. Please help; thanks in advance.
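Editorial note: a sketch (not part of the issue) of the memory mitigations later suggested in the comments — bfloat16 via the XLA_USE_BF16 environment variable, a smaller per-core batch, and the "fork" start method so the eight child processes share the parent's already-loaded data instead of re-loading it. `_mp_fn` is a placeholder for the notebook's own training function.

```python
import os
os.environ["XLA_USE_BF16"] = "1"   # store activations in bfloat16 on TPU

import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

FLAGS = {"batch_size": 4}           # reduced from 8; tune to what actually fits

def _mp_fn(rank, flags):
    device = xm.xla_device()        # placeholder: the notebook's train loop goes here
    ...

if __name__ == "__main__":
    # "fork" lets child processes share already-loaded arrays instead of
    # re-importing/re-loading them eight times (a common SIGKILL cause on Kaggle).
    xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=8, start_method="fork")
```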
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3660/reactions", "total_count": 10, "+1": 10, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3660/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3659
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3659/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3659/comments
https://api.github.com/repos/huggingface/transformers/issues/3659/events
https://github.com/huggingface/transformers/pull/3659
595,256,650
MDExOlB1bGxSZXF1ZXN0Mzk5NzYzMTYx
3,659
Add model card
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=h1) Report\n> Merging [#3659](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/39a34cc375ed79d18888464289b83713fc20f7d4&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3659/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3659 +/- ##\n=======================================\n Coverage 78.29% 78.29% \n=======================================\n Files 104 104 \n Lines 17628 17628 \n=======================================\n Hits 13801 13801 \n Misses 3827 3827 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (ΓΈ)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=footer). Last update [39a34cc...8d1de79](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,586
1,586
1,586
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3659/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3659", "html_url": "https://github.com/huggingface/transformers/pull/3659", "diff_url": "https://github.com/huggingface/transformers/pull/3659.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3659.patch", "merged_at": 1586205003000 }
https://api.github.com/repos/huggingface/transformers/issues/3658
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3658/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3658/comments
https://api.github.com/repos/huggingface/transformers/issues/3658/events
https://github.com/huggingface/transformers/pull/3658
595,251,960
MDExOlB1bGxSZXF1ZXN0Mzk5NzU5MzEx
3,658
Add model card
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=h1) Report\n> Merging [#3658](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/39a34cc375ed79d18888464289b83713fc20f7d4&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3658/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3658 +/- ##\n==========================================\n- Coverage 78.29% 78.23% -0.06% \n==========================================\n Files 104 104 \n Lines 17628 17628 \n==========================================\n- Hits 13801 13791 -10 \n- Misses 3827 3837 +10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3658/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.65% <0.00%> (-0.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3658/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.63% <0.00%> (-0.66%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=footer). Last update [39a34cc...f390cb4](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,586
1,586
1,586
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3658/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3658/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3658", "html_url": "https://github.com/huggingface/transformers/pull/3658", "diff_url": "https://github.com/huggingface/transformers/pull/3658.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3658.patch", "merged_at": 1586204992000 }
https://api.github.com/repos/huggingface/transformers/issues/3657
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3657/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3657/comments
https://api.github.com/repos/huggingface/transformers/issues/3657/events
https://github.com/huggingface/transformers/issues/3657
595,251,321
MDU6SXNzdWU1OTUyNTEzMjE=
3,657
Weird summarization results - the summary is longer than the input
{ "login": "metahgva", "id": 9355520, "node_id": "MDQ6VXNlcjkzNTU1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/9355520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/metahgva", "html_url": "https://github.com/metahgva", "followers_url": "https://api.github.com/users/metahgva/followers", "following_url": "https://api.github.com/users/metahgva/following{/other_user}", "gists_url": "https://api.github.com/users/metahgva/gists{/gist_id}", "starred_url": "https://api.github.com/users/metahgva/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/metahgva/subscriptions", "organizations_url": "https://api.github.com/users/metahgva/orgs", "repos_url": "https://api.github.com/users/metahgva/repos", "events_url": "https://api.github.com/users/metahgva/events{/privacy}", "received_events_url": "https://api.github.com/users/metahgva/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "You can pass `summarizer(data, min_length=10, max_length=20)` to get a summary whose length is between 10 and 20 tokens. By default, summaries will be between 56 and 142 tokens. ", "Thanks @sshleifer, interestingly now by having a max_length the summary is just arbitrarily cut, which is not great either. Is there a way to constrain the summary length and actually preserve the sense? \r\n\r\n> [{'summary_text': '\"We have a telephony partner who is very interested in this program and may be'}]", "The logic of the program is \"generate the most likely summary\" of between `min_length` and `max_length`. So it's not programmed to cut the summary in a rules based way.\r\n\r\nWith that in mind, I've also seen poor results summarizing documents that are very different than the finetuning distribution (news articles of ~1024 tokens).\r\n\r\nYou *might* get better results with `summarizer = pipeline(task=\"summarization\", model='bart-large-xsum')` .", "> The logic of the program is \"generate the most likely summary\" of between min_length and max_length. So it's not programmed to cut the summary in a rules based way.\r\nThanks for confirming - seems to be the right approach :)! \r\n\r\n> You might get better results with summarizer = pipeline(task=\"summarization\", model='bart-large-xsum') .\r\nOk, will give it a try then! \r\n\r\n> With that in mind, I've also seen poor results summarizing documents that are very different than the finetuning distribution (news articles of ~1024 tokens).\r\nSo you want to keep it open as a bug or should we close? \r\n\r\nAs a side request, it would be awesome to have metrics associated with each models that are part of transformers to help users choose the right one for their job (cc: @julien-c ).\r\n", "Hi @sshleifer Can we increase token length beyond 1024 for generating a summary.\r\nI got the following message while generating a summary of the 20000-word document.\r\n\r\n`Your max_length is set to 1300, but you input_length is only 1024. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)`\r\n", "Unfortunately, Bart can only process 1024 tokens at once, so your best best would be to split your doc into chunks, summarize each one, and concatenate the summaries.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,594
1,594
NONE
null
# πŸ› Bug ## Information Summarization task is returning an unexpected results. For an input of > "We have a telephony partner who is very interested in this program and may be able to help identify pilot customers." The results is > [{'summary_text': '"We have a telephony partner who is very interested in this program and may be able to help identify pilot customers," the company says. "We are looking at a number of different ways to get people talking to each other," it adds. "It\'s a very exciting time for us," says the company\'s chief operating officer.'}] Model I am using (Bert, XLNet ...): Summarization pipeline Language I am using the model on (English, Chinese ...): Eng The problem arises when using: * [ ] the official example scripts: (give details below) * [V ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [V ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Execute below script ```python !pip install -q transformers --upgrade from transformers import pipeline summarizer = pipeline(task="summarization") data = "We have a telephony partner who is very interested in this program and may be able to help identify pilot customers." print(summarizer(data)) ``` ## Expected behavior Would expect the summary to 1) not add contextual information that doesn't exist, and 2) to not be longer than the input. Arguably the input is short but still... ## Environment info Colab
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3657/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3656
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3656/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3656/comments
https://api.github.com/repos/huggingface/transformers/issues/3656/events
https://github.com/huggingface/transformers/pull/3656
595,243,393
MDExOlB1bGxSZXF1ZXN0Mzk5NzUyMjcz
3,656
Add model card
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,586
1,586
1,586
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3656/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3656", "html_url": "https://github.com/huggingface/transformers/pull/3656", "diff_url": "https://github.com/huggingface/transformers/pull/3656.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3656.patch", "merged_at": 1586204985000 }
https://api.github.com/repos/huggingface/transformers/issues/3655
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3655/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3655/comments
https://api.github.com/repos/huggingface/transformers/issues/3655/events
https://github.com/huggingface/transformers/pull/3655
595,241,053
MDExOlB1bGxSZXF1ZXN0Mzk5NzUwMzM0
3,655
Add model card
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=h1) Report\n> Merging [#3655](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/39a34cc375ed79d18888464289b83713fc20f7d4&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3655/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3655 +/- ##\n=======================================\n Coverage 78.29% 78.29% \n=======================================\n Files 104 104 \n Lines 17628 17628 \n=======================================\n+ Hits 13801 13802 +1 \n+ Misses 3827 3826 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.23% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=footer). Last update [39a34cc...0ee8e8c](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,586
1,586
1,586
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3655/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3655", "html_url": "https://github.com/huggingface/transformers/pull/3655", "diff_url": "https://github.com/huggingface/transformers/pull/3655.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3655.patch", "merged_at": 1586204976000 }
https://api.github.com/repos/huggingface/transformers/issues/3654
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3654/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3654/comments
https://api.github.com/repos/huggingface/transformers/issues/3654/events
https://github.com/huggingface/transformers/pull/3654
595,233,117
MDExOlB1bGxSZXF1ZXN0Mzk5NzQzODY0
3,654
Create model card
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,586
1,586
1,586
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3654/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3654", "html_url": "https://github.com/huggingface/transformers/pull/3654", "diff_url": "https://github.com/huggingface/transformers/pull/3654.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3654.patch", "merged_at": 1586204966000 }
https://api.github.com/repos/huggingface/transformers/issues/3653
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3653/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3653/comments
https://api.github.com/repos/huggingface/transformers/issues/3653/events
https://github.com/huggingface/transformers/pull/3653
595,223,833
MDExOlB1bGxSZXF1ZXN0Mzk5NzM2MTQx
3,653
Create model card
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,586
1,586
1,586
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3653/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3653", "html_url": "https://github.com/huggingface/transformers/pull/3653", "diff_url": "https://github.com/huggingface/transformers/pull/3653.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3653.patch", "merged_at": 1586204943000 }
https://api.github.com/repos/huggingface/transformers/issues/3652
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3652/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3652/comments
https://api.github.com/repos/huggingface/transformers/issues/3652/events
https://github.com/huggingface/transformers/pull/3652
595,142,738
MDExOlB1bGxSZXF1ZXN0Mzk5NjY4NTA1
3,652
Create README.md for ktrapeznikov/biobert_v1.1_pubmed_squad_v2
{ "login": "ktrapeznikov", "id": 4052002, "node_id": "MDQ6VXNlcjQwNTIwMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/4052002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ktrapeznikov", "html_url": "https://github.com/ktrapeznikov", "followers_url": "https://api.github.com/users/ktrapeznikov/followers", "following_url": "https://api.github.com/users/ktrapeznikov/following{/other_user}", "gists_url": "https://api.github.com/users/ktrapeznikov/gists{/gist_id}", "starred_url": "https://api.github.com/users/ktrapeznikov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ktrapeznikov/subscriptions", "organizations_url": "https://api.github.com/users/ktrapeznikov/orgs", "repos_url": "https://api.github.com/users/ktrapeznikov/repos", "events_url": "https://api.github.com/users/ktrapeznikov/events{/privacy}", "received_events_url": "https://api.github.com/users/ktrapeznikov/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "[**Model page**](https://huggingface.co/ktrapeznikov/biobert_v1.1_pubmed_squad_v2)" ]
1,586
1,586
1,586
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3652/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3652", "html_url": "https://github.com/huggingface/transformers/pull/3652", "diff_url": "https://github.com/huggingface/transformers/pull/3652.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3652.patch", "merged_at": 1586205330000 }
https://api.github.com/repos/huggingface/transformers/issues/3651
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3651/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3651/comments
https://api.github.com/repos/huggingface/transformers/issues/3651/events
https://github.com/huggingface/transformers/issues/3651
595,142,240
MDU6SXNzdWU1OTUxNDIyNDA=
3,651
❓Adding new tokens to pre-trained tokenizer
{ "login": "yash1994", "id": 13917659, "node_id": "MDQ6VXNlcjEzOTE3NjU5", "avatar_url": "https://avatars.githubusercontent.com/u/13917659?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yash1994", "html_url": "https://github.com/yash1994", "followers_url": "https://api.github.com/users/yash1994/followers", "following_url": "https://api.github.com/users/yash1994/following{/other_user}", "gists_url": "https://api.github.com/users/yash1994/gists{/gist_id}", "starred_url": "https://api.github.com/users/yash1994/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yash1994/subscriptions", "organizations_url": "https://api.github.com/users/yash1994/orgs", "repos_url": "https://api.github.com/users/yash1994/repos", "events_url": "https://api.github.com/users/yash1994/events{/privacy}", "received_events_url": "https://api.github.com/users/yash1994/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "docs are pretty nice imho;", "If I had found what I looking for in the documentation then why would I open an issue and waste someone else's time? I know other approaches to this problem but one seemed to be more time saving so just checking for implementation available. But now it seems that people are more concerned about critical bugs only in the issue section. Closing the issue for good.", "There is no way to dynamically add unknown tokens to the vocabulary. The simplest way to do it would be to encode the sequence, detect unknowns, and then add these to the vocabulary, which seems to be what you did!\r\n\r\nPlease be aware that you will have to resize the model's embedding matrix according to the tokens you've added.", "Or you can map all such tokens a group of them maybe to an OOV kinda token as well. ", "So the only way to update tokenizer is to get all unknowns first then resize model embedding matrix. Thanks @LysandreJik and @AdityaSoni19031997 " ]
1,586
1,586
1,586
NONE
null
## Details

Hi,

I am working with the DistilBERT multilingual model for sequence classification tasks, where I need to add some additional languages apart from those mentioned [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages). And for that, I am struggling to find the correct way to update the tokenizer.

From the documentation, I inferred that first I have to get all new tokens in a list, call `tokenizer.add_tokens()`, and then again I have to pass those new sentences to the tokenizer and get them tokenized.

So the real question: is there any method I can use to update the tokenizer and tokenize a sentence at the same time (when the tokenizer sees an unknown token, it adds the token to the dictionary)?

Thanks in advance.
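A sketch of the approach confirmed in the comments — encode, detect unknowns, add them, then resize the embedding matrix. The checkpoint name is real, but the sample sentence is a placeholder, and the whitespace-split unknown detection is a simple heuristic, not the only way to do it:

```python
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = DistilBertModel.from_pretrained("distilbert-base-multilingual-cased")

sentence = "a sentence containing some outofvocabularyterm"

# Detect whitespace-separated words the tokenizer can only map to [UNK].
unknowns = [
    word for word in sentence.split()
    if tokenizer.unk_token_id in tokenizer.encode(word, add_special_tokens=False)
]

# Add them to the vocabulary, then resize the model's embedding matrix to match.
num_added = tokenizer.add_tokens(unknowns)
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))

ids = tokenizer.encode(sentence)  # newly added tokens now get their own ids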
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3651/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3650
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3650/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3650/comments
https://api.github.com/repos/huggingface/transformers/issues/3650/events
https://github.com/huggingface/transformers/issues/3650
595,100,909
MDU6SXNzdWU1OTUxMDA5MDk=
3,650
How can I judge whether a word is in the dictionary?
{ "login": "smelly-dog", "id": 32981640, "node_id": "MDQ6VXNlcjMyOTgxNjQw", "avatar_url": "https://avatars.githubusercontent.com/u/32981640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/smelly-dog", "html_url": "https://github.com/smelly-dog", "followers_url": "https://api.github.com/users/smelly-dog/followers", "following_url": "https://api.github.com/users/smelly-dog/following{/other_user}", "gists_url": "https://api.github.com/users/smelly-dog/gists{/gist_id}", "starred_url": "https://api.github.com/users/smelly-dog/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/smelly-dog/subscriptions", "organizations_url": "https://api.github.com/users/smelly-dog/orgs", "repos_url": "https://api.github.com/users/smelly-dog/repos", "events_url": "https://api.github.com/users/smelly-dog/events{/privacy}", "received_events_url": "https://api.github.com/users/smelly-dog/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,591
1,591
NONE
null
# ❓ Questions & Help

## Details

Will different words get the same ids in the tokenizer? I ask because I just met this situation, and it looks like this:

candidate is ['charge', 'greet', 'treat', 'reward']
and candidate_ids is [10813, 1, 13581, 1]
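A sketch of one way to check vocabulary membership. The checkpoint here is a placeholder (the issue does not say which model produced the ids above); the key point is that every out-of-vocabulary token maps to the same `unk_token_id`:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint

candidates = ["charge", "greet", "treat", "reward"]
vocab = tokenizer.get_vocab()

for word in candidates:
    idx = tokenizer.convert_tokens_to_ids(word)
    # All out-of-vocabulary words map to the same unk_token_id, which is why
    # two different candidates can end up with identical ids.
    print(word, idx, word in vocab, idx == tokenizer.unk_token_id)
```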
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3650/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3649
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3649/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3649/comments
https://api.github.com/repos/huggingface/transformers/issues/3649/events
https://github.com/huggingface/transformers/pull/3649
595,029,879
MDExOlB1bGxSZXF1ZXN0Mzk5NTczMTAx
3,649
Add model card for BERTeus
{ "login": "jjacampos", "id": 11363790, "node_id": "MDQ6VXNlcjExMzYzNzkw", "avatar_url": "https://avatars.githubusercontent.com/u/11363790?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jjacampos", "html_url": "https://github.com/jjacampos", "followers_url": "https://api.github.com/users/jjacampos/followers", "following_url": "https://api.github.com/users/jjacampos/following{/other_user}", "gists_url": "https://api.github.com/users/jjacampos/gists{/gist_id}", "starred_url": "https://api.github.com/users/jjacampos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jjacampos/subscriptions", "organizations_url": "https://api.github.com/users/jjacampos/orgs", "repos_url": "https://api.github.com/users/jjacampos/repos", "events_url": "https://api.github.com/users/jjacampos/events{/privacy}", "received_events_url": "https://api.github.com/users/jjacampos/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=h1) Report\n> Merging [#3649](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ee410560e45ae3c619dc1e0b0fc4d257c48e18a&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3649/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3649 +/- ##\n=======================================\n Coverage 78.28% 78.29% \n=======================================\n Files 104 104 \n Lines 17628 17628 \n=======================================\n+ Hits 13800 13801 +1 \n+ Misses 3828 3827 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=footer). Last update [2ee4105...5b63e6a](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Awesome, thanks for sharing – also cc @joeddav \r\n\r\n[**Model page**](https://huggingface.co/ixa-ehu/berteus-base-cased)" ]
1,586
1,586
1,586
CONTRIBUTOR
null
This PR includes the model card for the BERTeus model, which has been recently uploaded to the huggingface repository.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3649/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3649/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3649", "html_url": "https://github.com/huggingface/transformers/pull/3649", "diff_url": "https://github.com/huggingface/transformers/pull/3649.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3649.patch", "merged_at": 1586204486000 }
https://api.github.com/repos/huggingface/transformers/issues/3648
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3648/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3648/comments
https://api.github.com/repos/huggingface/transformers/issues/3648/events
https://github.com/huggingface/transformers/issues/3648
594,993,297
MDU6SXNzdWU1OTQ5OTMyOTc=
3,648
Chatbot QnA feature for given text corpus
{ "login": "shashankMadan-designEsthetics", "id": 45225143, "node_id": "MDQ6VXNlcjQ1MjI1MTQz", "avatar_url": "https://avatars.githubusercontent.com/u/45225143?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shashankMadan-designEsthetics", "html_url": "https://github.com/shashankMadan-designEsthetics", "followers_url": "https://api.github.com/users/shashankMadan-designEsthetics/followers", "following_url": "https://api.github.com/users/shashankMadan-designEsthetics/following{/other_user}", "gists_url": "https://api.github.com/users/shashankMadan-designEsthetics/gists{/gist_id}", "starred_url": "https://api.github.com/users/shashankMadan-designEsthetics/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shashankMadan-designEsthetics/subscriptions", "organizations_url": "https://api.github.com/users/shashankMadan-designEsthetics/orgs", "repos_url": "https://api.github.com/users/shashankMadan-designEsthetics/repos", "events_url": "https://api.github.com/users/shashankMadan-designEsthetics/events{/privacy}", "received_events_url": "https://api.github.com/users/shashankMadan-designEsthetics/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You might want to check this, It has a very nice example. [https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering) ", "> You might want to check this, It has a very nice example. https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering\r\n\r\nWell, I've known this solution for quite some time, this give outputs of start and end logits.\r\nBut i'd like to know if there is any implementation like if i just feed in just a corpus of text say a article, essay and so on and ask questions it would give some relevant outout" ]
1,586
1,592
1,592
NONE
null
Is there a way we can have a chatbot, specifically one which can answer questions on a given text corpus? Is there any transformer model which we can train for this, and how?
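In the spirit of the example linked in the comments, a minimal extractive question-answering sketch over a supplied text; note it extracts a span from the given context rather than holding a free-form conversation, and the context sentences here are placeholders:

```python
from transformers import pipeline

qa = pipeline("question-answering")  # defaults to a SQuAD-finetuned model

context = (
    "Hugging Face is a company based in New York City. "
    "Its Transformers library provides pretrained models for NLP tasks."
)

result = qa(question="Where is Hugging Face based?", context=context)
print(result["answer"], result["score"])
```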
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3648/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3647
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3647/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3647/comments
https://api.github.com/repos/huggingface/transformers/issues/3647/events
https://github.com/huggingface/transformers/issues/3647
594,984,123
MDU6SXNzdWU1OTQ5ODQxMjM=
3,647
Bertabs metrics lower than paper
{ "login": "kyungyunji", "id": 20571305, "node_id": "MDQ6VXNlcjIwNTcxMzA1", "avatar_url": "https://avatars.githubusercontent.com/u/20571305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kyungyunji", "html_url": "https://github.com/kyungyunji", "followers_url": "https://api.github.com/users/kyungyunji/followers", "following_url": "https://api.github.com/users/kyungyunji/following{/other_user}", "gists_url": "https://api.github.com/users/kyungyunji/gists{/gist_id}", "starred_url": "https://api.github.com/users/kyungyunji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kyungyunji/subscriptions", "organizations_url": "https://api.github.com/users/kyungyunji/orgs", "repos_url": "https://api.github.com/users/kyungyunji/repos", "events_url": "https://api.github.com/users/kyungyunji/events{/privacy}", "received_events_url": "https://api.github.com/users/kyungyunji/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1990944155, "node_id": "MDU6TGFiZWwxOTkwOTQ0MTU1", "url": "https://api.github.com/repos/huggingface/transformers/labels/bertabs", "name": "bertabs", "color": "9ab22e", "default": false, "description": "" } ]
closed
false
null
[]
[ "bump. i have the same question. paper says 5 BeamSize and better accuracy for bert-base. ", "same here", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,598
1,598
NONE
null
I tested the abstractive summarization pre-trained model using the source under transformers/examples/summarization/bertabs/... My dataset is CNN & Daily Mail, which is 30 thousand docs. However, the resulting ROUGE scores are as follows:

****** ROUGE SCORES ******

** ROUGE 1
F1 >> 0.275
Precision >> 0.299
Recall >> 0.260

** ROUGE 2
F1 >> 0.161
Precision >> 0.184
Recall >> 0.149

** ROUGE L
F1 >> 0.305
Precision >> 0.326
Recall >> 0.290

Why is the result different from that of the article, Text Summarization with Pretrained Encoders?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3647/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3646
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3646/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3646/comments
https://api.github.com/repos/huggingface/transformers/issues/3646/events
https://github.com/huggingface/transformers/issues/3646
594,974,707
MDU6SXNzdWU1OTQ5NzQ3MDc=
3,646
Allow token regression in ForTokenClassification models
{ "login": "gsarti", "id": 16674069, "node_id": "MDQ6VXNlcjE2Njc0MDY5", "avatar_url": "https://avatars.githubusercontent.com/u/16674069?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gsarti", "html_url": "https://github.com/gsarti", "followers_url": "https://api.github.com/users/gsarti/followers", "following_url": "https://api.github.com/users/gsarti/following{/other_user}", "gists_url": "https://api.github.com/users/gsarti/gists{/gist_id}", "starred_url": "https://api.github.com/users/gsarti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gsarti/subscriptions", "organizations_url": "https://api.github.com/users/gsarti/orgs", "repos_url": "https://api.github.com/users/gsarti/repos", "events_url": "https://api.github.com/users/gsarti/events{/privacy}", "received_events_url": "https://api.github.com/users/gsarti/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I think that's a reasonable feature – Thoughts @LysandreJik @thomwolf?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,591
1,591
CONTRIBUTOR
null
# πŸš€ Feature request

The current `ForTokenClassification` class implementation for all models does not support regression.

I propose to adapt the current implementation of all the `ForTokenClassification` models in order to enable out-of-the-box token regression when `num_labels == 1`, similarly to what is currently available in the `ForSequenceClassification` models.

Concretely, this would mean converting this (taken from `AlbertForTokenClassification` as an example, line 873 in `modeling_albert.py`):

```python
if labels is not None:
    loss_fct = CrossEntropyLoss()
    # Only keep active parts of the loss
    if attention_mask is not None:
        active_loss = attention_mask.view(-1) == 1
        active_logits = logits.view(-1, self.num_labels)[active_loss]
        active_labels = labels.view(-1)[active_loss]
        loss = loss_fct(active_logits, active_labels)
    else:
        loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
    outputs = (loss,) + outputs

return outputs  # (loss), logits, (hidden_states), (attentions)
```

into something like this:

```python
if labels is not None:
    if self.num_labels == 1:
        # We are doing regression
        loss_fct = MSELoss()
        logits_view = logits.view(-1)
    else:
        # We are doing classification
        loss_fct = CrossEntropyLoss()
        logits_view = logits.view(-1, self.num_labels)
    # Only keep active parts of the loss
    if attention_mask is not None:
        active_loss = attention_mask.view(-1) == 1
        active_logits = logits_view[active_loss]
        active_labels = labels.view(-1)[active_loss]
        loss = loss_fct(active_logits, active_labels)
    else:
        loss = loss_fct(logits_view, labels.view(-1))
    outputs = (loss,) + outputs

return outputs  # (loss), logits, (hidden_states), (attentions)
```

## Motivation

I am currently working on token-level regression with multiple transformer models to predict eye-tracking metrics that are commonly considered proxies for cognitive processing in psycholinguistics (e.g. word reading times, fixation counts, etc.). Given that most of these are continuous metrics, the ability to use `transformers` for token regression would make my work much faster. Moreover, I believe this functionality can benefit other researchers working with token-level continuous metrics.

## Your contribution

If this feature is regarded as interesting by the maintainers, I can submit a PR with the suggested changes applied to all currently supported models.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3646/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3645
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3645/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3645/comments
https://api.github.com/repos/huggingface/transformers/issues/3645/events
https://github.com/huggingface/transformers/issues/3645
594,933,513
MDU6SXNzdWU1OTQ5MzM1MTM=
3,645
❓ How to run pipeline (summarization) in FP16 mode?
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sshleifer - do you have more info on that? ", "You can't without editing the code, unfortunately.\r\n", "Running a pipeline in FP16 mode would be really useful for optimizing the GPU RAM usage. Can this be turned into a feature request?\r\n\r\n**Edit:** I just found out that the following works:\r\n\r\n```\r\npipeline.model.half()\r\n```" ]
1,586
1,653
1,586
CONTRIBUTOR
null
# ❓ Questions & Help

I couldn't find in the documentation any parameter that allows running a pipeline in FP16 mode.

**Did I miss it, or is it not a feature yet?**
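A sketch of the `model.half()` workaround mentioned in the comments; it assumes a CUDA GPU is available (FP16 is generally not supported on CPU) and that the pipeline's model tolerates half precision:

```python
from transformers import pipeline

summarizer = pipeline(task="summarization", device=0)  # device=0 -> first CUDA GPU
summarizer.model = summarizer.model.half()             # cast weights to FP16

print(summarizer("Some long article text ..."))
```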
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3645/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3644
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3644/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3644/comments
https://api.github.com/repos/huggingface/transformers/issues/3644/events
https://github.com/huggingface/transformers/issues/3644
594,906,632
MDU6SXNzdWU1OTQ5MDY2MzI=
3,644
How can I track the performance of my GPT-2 model during finetuning?
{ "login": "hmdgit", "id": 59701320, "node_id": "MDQ6VXNlcjU5NzAxMzIw", "avatar_url": "https://avatars.githubusercontent.com/u/59701320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hmdgit", "html_url": "https://github.com/hmdgit", "followers_url": "https://api.github.com/users/hmdgit/followers", "following_url": "https://api.github.com/users/hmdgit/following{/other_user}", "gists_url": "https://api.github.com/users/hmdgit/gists{/gist_id}", "starred_url": "https://api.github.com/users/hmdgit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hmdgit/subscriptions", "organizations_url": "https://api.github.com/users/hmdgit/orgs", "repos_url": "https://api.github.com/users/hmdgit/repos", "events_url": "https://api.github.com/users/hmdgit/events{/privacy}", "received_events_url": "https://api.github.com/users/hmdgit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "[run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) allow you to track you training performance (validation perplexity, training loss and learning rate) using tensorboad. You need to have tensorboad installed. And then you can just run `tensorboard --logdir=runs` to follow your training.", "Thanks a lot. It works. \r\nI have used these commands in Google Colab\r\n```\r\n%load_ext tensorboard\r\n%tensorboard --logdir=runs\r\n```" ]
1,586
1,586
1,586
NONE
null
Hi,

I am new to using the HuggingFace Transformers library. I am using [Google Colab](https://colab.research.google.com/github/interactive-fiction-class/interactive-fiction-class.github.io/blob/master/homeworks/language-model/hw4_transformer.ipynb) to fine-tune a GPT-2 model.

Google Colab only displays the output of the last 5000 lines, so I cannot figure out the performance of previous checkpoints, whose output vanishes. I would like to track the performance of my training model over the whole period of time. Is it possible to track it using **TensorBoard**, or does any other way exist?

I have noticed a variable "logging_steps" in the [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) file, but I do not know how I can use it to track the performance of the training model in Google Colab.
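As the answering comment notes, a sketch of how `--logging_steps` might be combined with evaluation during training. The flag names match the example script of that era, but the data files, step values, and output directory here are placeholders; TensorBoard is then pointed at the `runs` directory as shown in the comment above:

```
python run_language_modeling.py \
    --output_dir=output \
    --model_type=gpt2 \
    --model_name_or_path=gpt2 \
    --do_train --train_data_file=train.txt \
    --do_eval --eval_data_file=valid.txt \
    --evaluate_during_training \
    --logging_steps=100
```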
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3644/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3644/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3643
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3643/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3643/comments
https://api.github.com/repos/huggingface/transformers/issues/3643/events
https://github.com/huggingface/transformers/pull/3643
594,738,706
MDExOlB1bGxSZXF1ZXN0Mzk5MzI4OTUw
3,643
BioMed Roberta-Base (AllenAI)
{ "login": "kernelmachine", "id": 1164135, "node_id": "MDQ6VXNlcjExNjQxMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/1164135?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kernelmachine", "html_url": "https://github.com/kernelmachine", "followers_url": "https://api.github.com/users/kernelmachine/followers", "following_url": "https://api.github.com/users/kernelmachine/following{/other_user}", "gists_url": "https://api.github.com/users/kernelmachine/gists{/gist_id}", "starred_url": "https://api.github.com/users/kernelmachine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kernelmachine/subscriptions", "organizations_url": "https://api.github.com/users/kernelmachine/orgs", "repos_url": "https://api.github.com/users/kernelmachine/repos", "events_url": "https://api.github.com/users/kernelmachine/events{/privacy}", "received_events_url": "https://api.github.com/users/kernelmachine/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=h1) Report\n> Merging [#3643](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1789c7daf1b8013006b0aef6cb1b8f80573031c5&el=desc) will **increase** coverage by `0.94%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3643/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3643 +/- ##\n==========================================\n+ Coverage 77.32% 78.26% +0.94% \n==========================================\n Files 104 104 \n Lines 17628 17628 \n==========================================\n+ Hits 13630 13796 +166 \n+ Misses 3998 3832 -166 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3643/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3643/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3643/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3643/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3643/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=footer). Last update [1789c7d...1d980d5](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Slightly tweaked and merged! [**model page**](https://huggingface.co/allenai/biomed_roberta_base)" ]
1,586
1,586
1,586
CONTRIBUTOR
null
This PR includes the model card for Biomed-roberta base, which @kyleclo recently uploaded to allenai's huggingface model repository.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3643/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3643/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3643", "html_url": "https://github.com/huggingface/transformers/pull/3643", "diff_url": "https://github.com/huggingface/transformers/pull/3643.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3643.patch", "merged_at": 1586203930000 }
https://api.github.com/repos/huggingface/transformers/issues/3642
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3642/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3642/comments
https://api.github.com/repos/huggingface/transformers/issues/3642/events
https://github.com/huggingface/transformers/pull/3642
594,737,356
MDExOlB1bGxSZXF1ZXN0Mzk5MzI3ODQx
3,642
Fix roberta checkpoint conversion script
{ "login": "myleott", "id": 231798, "node_id": "MDQ6VXNlcjIzMTc5OA==", "avatar_url": "https://avatars.githubusercontent.com/u/231798?v=4", "gravatar_id": "", "url": "https://api.github.com/users/myleott", "html_url": "https://github.com/myleott", "followers_url": "https://api.github.com/users/myleott/followers", "following_url": "https://api.github.com/users/myleott/following{/other_user}", "gists_url": "https://api.github.com/users/myleott/gists{/gist_id}", "starred_url": "https://api.github.com/users/myleott/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/myleott/subscriptions", "organizations_url": "https://api.github.com/users/myleott/orgs", "repos_url": "https://api.github.com/users/myleott/repos", "events_url": "https://api.github.com/users/myleott/events{/privacy}", "received_events_url": "https://api.github.com/users/myleott/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=h1) Report\n> Merging [#3642](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1789c7daf1b8013006b0aef6cb1b8f80573031c5&el=desc) will **increase** coverage by `0.97%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3642/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3642 +/- ##\n==========================================\n+ Coverage 77.32% 78.29% +0.97% \n==========================================\n Files 104 104 \n Lines 17628 17628 \n==========================================\n+ Hits 13630 13801 +171 \n+ Misses 3998 3827 -171 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (+0.81%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=footer). Last update [1789c7d...bd60e83](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hi @myleott, thanks for looking into this! Indeed the conversion script is failing due to that bias. The checkpoints on the S3 do not need to be re-uploaded, it was only the conversion of the new checkpoints that needed to be updated.\r\n\r\nI manually checked that we have the same results between the torch hub models and those hosted on our S3 + we have [integration tests that test just that ](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_roberta.py#L322) :)\r\n\r\nThanks for your fix! " ]
1,586
1,586
1,586
CONTRIBUTOR
null
After #2521 and #2958, this script stopped working. We need to set the bias on the new `decoder` Linear directly. cc @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3642/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3642", "html_url": "https://github.com/huggingface/transformers/pull/3642", "diff_url": "https://github.com/huggingface/transformers/pull/3642.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3642.patch", "merged_at": 1586275403000 }
https://api.github.com/repos/huggingface/transformers/issues/3641
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3641/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3641/comments
https://api.github.com/repos/huggingface/transformers/issues/3641/events
https://github.com/huggingface/transformers/issues/3641
594,631,224
MDU6SXNzdWU1OTQ2MzEyMjQ=
3,641
Can't evaluate official TensorFlow NER model
{ "login": "TarasPriadka", "id": 14134797, "node_id": "MDQ6VXNlcjE0MTM0Nzk3", "avatar_url": "https://avatars.githubusercontent.com/u/14134797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TarasPriadka", "html_url": "https://github.com/TarasPriadka", "followers_url": "https://api.github.com/users/TarasPriadka/followers", "following_url": "https://api.github.com/users/TarasPriadka/following{/other_user}", "gists_url": "https://api.github.com/users/TarasPriadka/gists{/gist_id}", "starred_url": "https://api.github.com/users/TarasPriadka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TarasPriadka/subscriptions", "organizations_url": "https://api.github.com/users/TarasPriadka/orgs", "repos_url": "https://api.github.com/users/TarasPriadka/repos", "events_url": "https://api.github.com/users/TarasPriadka/events{/privacy}", "received_events_url": "https://api.github.com/users/TarasPriadka/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Another comment, I have looked at the cache that the model outputs, and it has a bunch of question marks for all of the cache files(train,dev,test). That makes sense as it is cached, but still I am thinking that I might be doing something wrong. If someone knows where I might have made a mistake, please let me know.", "This has been fixed in this PR https://github.com/fastai/fastprogress/pull/59\r\nYou will have to update fastprogress to the latest build on master for it to work.\r\n\r\nThis worked for me:\r\n```\r\npip uninstall fastprogress\r\npip install git+https://github.com/fastai/fastprogress.git\r\n```\r\n\r\nedit: the correct install instruction this time", "Thank you! I will check if it got fixed for me and close the issue.", "@apcode Thanks it worked wonderful! Closing the issue." ]
1,586
1,587
1,587
NONE
null
# πŸ› Bug: Can't evaluate official TensorFlow NER model ## Information Model I am using (Bert, XLNet ...): I am using bert-base-multilingual-cased Language I am using the model on (English, Chinese ...): German The problem arises when using: * [X] the official example scripts: I was using the official script for the NER model training. at this link https://github.com/huggingface/transformers/tree/master/examples/ner * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: I was training an NER model * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Follow the steps to download the data and run the model for TensorFlow 2. Training the model for 3 epochs 3. On evaluation, the model will fail 4. Then I tried to explicitly call the evaluation of the model. Using this: python3 run_tf_ner.py --data_dir ~/data --model_type bert --labels ~/data/labels.txt --model_name_or_path $BERT_MODEL --output_dir $OUTPUT_DIR --max_seq_length $MAX_LENGTH --num_train_epochs $NUM_EPOCHS --per_device_train_batch_size $BATCH_SIZE --save_steps $SAVE_STEPS --seed $SEED --do_eval Here is what I see when calling the evaluation step: ```I0405 20:15:11.758301 140712645343040 modeling_tf_utils.py:388] loading weights file germeval-model/tf_model.h5 2020-04-05 20:15:12.024952: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2020-04-05 20:15:12.031397: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2200000000 Hz 2020-04-05 20:15:12.032399: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3ae8970 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2020-04-05 20:15:12.032438: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version I0405 20:15:15.697083 140712645343040 run_tf_ner.py:418] Loading features from cached file /home/taras/data/cached_dev_bert-base-multilingual-cased_128.tf_record Traceback (most recent call last): File "run_tf_ner.py", line 641, in <module> app.run(main) File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 299, in run _run_main(main, args) File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 250, in _run_main sys.exit(main(argv)) File "run_tf_ner.py", line 576, in main args, strategy, model, tokenizer, labels, pad_token_label_id, mode="dev" File "run_tf_ner.py", line 314, in evaluate eval_iterator = progress_bar(eval_dataset, total=num_eval_steps, parent=master, display=args["n_device"] > 1) File "/home/taras/.local/lib/python3.7/site-packages/fastprogress/fastprogress.py", line 226, in __init__ super().__init__(gen, total, display, leave, parent, master) File "/home/taras/.local/lib/python3.7/site-packages/fastprogress/fastprogress.py", line 24, in __init__ parent.add_child(self) File "/home/taras/.local/lib/python3.7/site-packages/fastprogress/fastprogress.py", line 264, in add_child self.child.prefix = f'Epoch {self.main_bar.last_v+1}/{self.main_bar.total} :' TypeError: unsupported operand type(s) for +: 'NoneType' and 'int' ``` ## Expected behavior Train an NER model and be able to evaluate and predict using the trained weights. 
## Environment info - `transformers` version: 2.7.0 - Platform: Linux-4.19.0-8-cloud-amd64-x86_64-with-debian-10.3 - Python version: 3.7.3 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.2.0-rc2 (False) - Using GPU in script?: Yes, Tesla K80 - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3641/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3640
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3640/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3640/comments
https://api.github.com/repos/huggingface/transformers/issues/3640/events
https://github.com/huggingface/transformers/issues/3640
594,557,277
MDU6SXNzdWU1OTQ1NTcyNzc=
3,640
Wrong Mask LM prediction with BertForMaskedLM
{ "login": "AtmaHou", "id": 15045402, "node_id": "MDQ6VXNlcjE1MDQ1NDAy", "avatar_url": "https://avatars.githubusercontent.com/u/15045402?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AtmaHou", "html_url": "https://github.com/AtmaHou", "followers_url": "https://api.github.com/users/AtmaHou/followers", "following_url": "https://api.github.com/users/AtmaHou/following{/other_user}", "gists_url": "https://api.github.com/users/AtmaHou/gists{/gist_id}", "starred_url": "https://api.github.com/users/AtmaHou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AtmaHou/subscriptions", "organizations_url": "https://api.github.com/users/AtmaHou/orgs", "repos_url": "https://api.github.com/users/AtmaHou/repos", "events_url": "https://api.github.com/users/AtmaHou/events{/privacy}", "received_events_url": "https://api.github.com/users/AtmaHou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is probably because you're not using special tokens at all. When using BERT you should add the `[CLS]` and `[SEP]` tokens at the appropriate places. Modifying your code to include these generates the correct answer:\r\n\r\n```py\r\n# Tokenized input\r\ntext = \"Who was Jim Henson ? Jim Henson was a puppeteer\"\r\ntokenized_text = tokenizer.tokenize(text)\r\n\r\nmasked_index = 7 # <-- the masked index needs to be offset by 1 because a [CLS] token will be added at the beginning\r\ntokenized_text[masked_index] = '[MASK]'\r\n\r\n# Convert token to vocabulary indices\r\nindexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)\r\nindexed_tokens = tokenizer.build_inputs_with_special_tokens(indexed_tokens) # <-- should add special tokens, this method does it\r\n\r\n\r\nsegments_ids = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1] # <-- modify this to include special tokens\r\ninput_mask = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] # <-- modify this to include special tokens\r\n\r\n# ======== Convert inputs to PyTorch tensors\r\ntokens_tensor = torch.tensor([indexed_tokens])\r\nsegments_tensors = torch.tensor([segments_ids])\r\n\r\n# ======== predict tokens ========\r\nprint('== LM predicting ===')\r\n# Load pre-trained model (weights)\r\nmodel = BertForMaskedLM.from_pretrained(MODEL_PATH)\r\nmodel.eval()\r\n\r\n# Predict all tokens\r\npredictions = model(tokens_tensor, segments_tensors)[0]\r\n\r\n# confirm we were able to predict 'henson'\r\npredicted_index = torch.argmax(predictions[0, masked_index]).item()\r\npredicted_token = tokenizer.convert_ids_to_tokens([predicted_index])\r\nprint('predicted_token', predicted_token)\r\n```\r\n\r\nResult:\r\n\r\n```\r\n== tokenizing ===\r\n== LM predicting ===\r\npredicted_token ['henson']\r\n```\r\n\r\n\r\nPlease note that there is a much simpler way of doing what you did, by using the `encode` method which automatically manage the special tokens. The `encode_plus` method manages the attention mask and segment IDs as well. Here's the full code using the `encode_plus` method:\r\n\r\n```py\r\nimport torch\r\nfrom transformers import BertTokenizer, BertModel, BertForMaskedLM, AutoModel, AutoTokenizer, AutoModelWithLMHead, ElectraModel, ElectraForMaskedLM\r\n\r\nMODEL_PATH = 'bert-base-uncased'\r\n\r\nVOCAB = MODEL_PATH\r\n\r\nprint('== tokenizing ===')\r\ntokenizer = BertTokenizer.from_pretrained(VOCAB)\r\n\r\n# Tokenized input\r\ntext = \"Who was Jim Henson ? Jim [MASK] was a puppeteer\"\r\ninputs = tokenizer.encode_plus(text, return_tensors=\"pt\")\r\n\r\nmasked_index = 7\r\n\r\nmodel = BertForMaskedLM.from_pretrained(MODEL_PATH)\r\nmodel.eval()\r\n\r\nprint('== LM predicting ===')\r\n# Predict all tokens\r\npredictions = model(**inputs)[0]\r\n\r\n# confirm we were able to predict 'henson'\r\npredicted_index = torch.argmax(predictions[0, masked_index]).item()\r\npredicted_token = tokenizer.convert_ids_to_tokens([predicted_index])\r\nprint('predicted_token', predicted_token)\r\n\r\n```", "Thanks a lot for the answer! I can move on my projects. \r\nIt is still kind of weird that the old code works correctly without '[CLS]' and '[SEP]'. \r\nHas some underlying code logic changed?", "It's weird, I agree! This is the correct way to do it, and that's the way it should have been done in the previous versions as well, though. Glad you could get your code working!" ]
1,586
1,587
1,587
NONE
null
# πŸ“š Migration ## Information <!-- Important information --> Model I am using (Bert, XLNet ...): Bert, Electra Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## Details ### When I transfer to transformers=2.7.0, I find that the LM model failed to predict correct masked tokens. I test the **transformer** model on the old LM example of **pytorch_pretrained_bert**: "Who was Jim Henson ? Jim Henson was a puppeteer" My test code goes like following: ```python # coding: utf-8 import torch from transformers import BertTokenizer, BertModel, BertForMaskedLM, AutoModel, AutoTokenizer, AutoModelWithLMHead, ElectraModel, ElectraForMaskedLM MODEL_PATH = 'Resources/bert-base-uncased/uncased_L-12_H-768_A-12/' VOCAB = MODEL_PATH print('== tokenizing ===') tokenizer = BertTokenizer.from_pretrained(VOCAB) # Tokenized input text = "Who was Jim Henson ? Jim Henson was a puppeteer" tokenized_text = tokenizer.tokenize(text) masked_index = 6 tokenized_text[masked_index] = '[MASK]' # Convert token to vocabulary indices indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) segments_ids = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1] input_mask = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] # ======== Convert inputs to PyTorch tensors tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) # ======== predict tokens ======== print('== LM predicting ===') # Load pre-trained model (weights) model = BertForMaskedLM.from_pretrained(MODEL_PATH) model.eval() # Predict all tokens predictions = model(tokens_tensor, segments_tensors)[0] # confirm we were able to predict 'henson' predicted_index = torch.argmax(predictions[0, masked_index]).item() predicted_token = tokenizer.convert_ids_to_tokens([predicted_index]) print('predicted_token', predicted_token) ``` ## Other Details (1) Such testing code works fine with the previous version of **pytorch_pretrained_bert**. But now it seems that model predicts a random token. (2) Random predicting also happened when I load electra model with ElectraForMaskedLM. <!-- A clear and concise description of the migration issue. If you have code snippets, please provide it here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.7.0 - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.2-gpu - Tensorflow version (GPU?): no - Using GPU in script?: no - Using distributed or parallel set-up in script?: no <!-- IMPORTANT: which version of the former library do you use? --> * `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): pytorch-pretrained-bert ## Checklist - [x] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [x] I checked if a related official extension example runs on my machine.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3640/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3639
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3639/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3639/comments
https://api.github.com/repos/huggingface/transformers/issues/3639/events
https://github.com/huggingface/transformers/issues/3639
594,543,059
MDU6SXNzdWU1OTQ1NDMwNTk=
3,639
Summarization pipeline - Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/modelcard.json'
{ "login": "metahgva", "id": 9355520, "node_id": "MDQ6VXNlcjkzNTU1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/9355520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/metahgva", "html_url": "https://github.com/metahgva", "followers_url": "https://api.github.com/users/metahgva/followers", "following_url": "https://api.github.com/users/metahgva/following{/other_user}", "gists_url": "https://api.github.com/users/metahgva/gists{/gist_id}", "starred_url": "https://api.github.com/users/metahgva/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/metahgva/subscriptions", "organizations_url": "https://api.github.com/users/metahgva/orgs", "repos_url": "https://api.github.com/users/metahgva/repos", "events_url": "https://api.github.com/users/metahgva/events{/privacy}", "received_events_url": "https://api.github.com/users/metahgva/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[ { "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false } ]
[ "@julien-c This is issue has been closed, but I continue to get the same exact error. Can you reopen it? Or do I have to open a new issue?", "Did you update from master. \r\n\r\nCan you paste the output of `transformers-cli env`", "I just upgraded to the latest version:\r\n```\r\n- `transformers` version: 2.8.0\r\n- Platform: Linux-5.3.0-46-generic-x86_64-with-Ubuntu-19.10-eoan\r\n- Python version: 3.7.5\r\n- PyTorch version (GPU?): 1.1.0 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\r\n```", "The fix is not in a released version yet so you need to install from source." ]
1,586
1,586
1,586
NONE
null
# πŸ› Bug ## Information Loading the summarization pipeline will result in below assertion: > Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/modelcard.json' to download model card file. > Creating an empty model card. The problem arises when using: * [V] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: `!pip install -q transformers --upgrade from transformers import pipeline summarizer = pipeline(task="summarization")` 1. Execute above code <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3639/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3639/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3638
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3638/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3638/comments
https://api.github.com/repos/huggingface/transformers/issues/3638/events
https://github.com/huggingface/transformers/issues/3638
594,479,175
MDU6SXNzdWU1OTQ0NzkxNzU=
3,638
Translation pipeline bug after 398 characters
{ "login": "MoritzLaurer", "id": 41862082, "node_id": "MDQ6VXNlcjQxODYyMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MoritzLaurer", "html_url": "https://github.com/MoritzLaurer", "followers_url": "https://api.github.com/users/MoritzLaurer/followers", "following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}", "gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}", "starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions", "organizations_url": "https://api.github.com/users/MoritzLaurer/orgs", "repos_url": "https://api.github.com/users/MoritzLaurer/repos", "events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}", "received_events_url": "https://api.github.com/users/MoritzLaurer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hi @MoritzLaurer, \r\n\r\nI played around with your example a bit and I don't get a good translation either! I think one of the main problems is that T5 was pretrained on a per sentence level - not on whole texts. \r\nTherefore you get quite good results when you split your text into sentences and translate each sentence on its own as follows: \r\n\r\n\r\n```\r\nfrom transformers import pipeline\r\ntranslator_de = pipeline(task='translation_en_to_de')\r\n\r\ntext_en = \"The 2019–20 coronavirus pandemic is an ongoing pandemic of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).[6] The outbreak was first identified in Wuhan, Hubei, China, in December 2019. The World Health Organization (WHO) declared the outbreak to be a Public Health Emergency of International Concern on 30 January 2020 and recognized it as a pandemic on 11 March.[7][8] As of 30 March 2020, more than 745,000[4] cases of COVID-19 have been reported in over 190 countries and territories, resulting in approximately 35,000[4] deaths. More than 156,500[4] people have since recovered.[5]\"\r\n\r\ntranslation_list = []\r\ntext_list = text_en.split('.')\r\n\r\nfor text in text_list:\r\n translation_list.append(translator_de(text + '.'))\r\n```\r\n\r\nI would actually always do this when using T5 on translation and then concatenate the sentences back together afterward. It's very rare that you need to know the previous or next sentence in order to get good translation results.\r\n\r\nPS:\r\nYou have to be careful when using `len(text_en)` it gives you the number of characters in your string not the number of words. Also note that `min_length` and `max_length` represent the number of minimal and maximal tokens (which is usually a bit less than the number of words). ", "Here the results, I got in German: \r\n\r\n```\r\n[[{'translation_text': 'Die Koronavirus-Pandemie 2019–20 ist eine anhaltende Pandemie der Koronavirus-Krankheit 2019 (COVID-19), verursacht durch das schwere akute Atemwegssyndrom Koronavirus 2 (SARS-CoV-2).'}],\r\n [{'translation_text': '[6] Der Ausbruch wurde erstmals im Dezember 2019 in Wuhan, Hubei, China, festgestellt.'}],\r\n [{'translation_text': 'Die Weltgesundheitsorganisation (WHO) hat den Ausbruch am 30. Januar 2020 als ΓΆffentlichen Gesundheitsnotstand von internationaler Bedeutung erklΓ€rt und ihn am 11. MΓ€rz als Pandemie anerkannt.'}],\r\n [{'translation_text': '[7][8] Zum 30. MΓ€rz 2020 wurden in ΓΌber 190 LΓ€ndern und Gebieten mehr als 745 000 FΓ€lle von COVID-19 gemeldet, was zu etwa 35 000 TodesfΓ€llen fΓΌhrte.'}],\r\n [{'translation_text': 'Mehr als 156.500[4] Menschen haben sich seitdem erholt.'}],\r\n [{'translation_text': '[5].'}]]\r\n```\r\n", "Hi @patrickvonplaten, \r\n\r\nGreat, thank you very much for the response! It makes sense and splitting in sentences seems like a good solution. (also thanks for clarifying that min_length refers to tokens and not characters)" ]
1,586
1,586
1,586
NONE
null
# πŸ› Bug The translation pipeline with T5 does not seem to allow longer translations than 400~ character. It either automatically stops at 398, or if I play with the min/max_length parameters, it produces gibberish after 400~ characters. ## Information I am using the translation pipeline (T5) I tried both: translator_de = pipeline(task='translation_en_to_de') translator_fr = pipeline(task='translation_en_to_fr') I tried the different suggestions in this short twitter discussion, but couldn't get it to work: https://twitter.com/PatrickPlaten/status/1244747294664200193 ## To reproduce Steps to reproduce the behavior: ```Python translator_de = pipeline(task='translation_en_to_de') text_en = "The 2019–20 coronavirus pandemic is an ongoing pandemic of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).[6] The outbreak was first identified in Wuhan, Hubei, China, in December 2019. The World Health Organization (WHO) declared the outbreak to be a Public Health Emergency of International Concern on 30 January 2020 and recognized it as a pandemic on 11 March.[7][8] As of 30 March 2020, more than 745,000[4] cases of COVID-19 have been reported in over 190 countries and territories, resulting in approximately 35,000[4] deaths. More than 156,500[4] people have since recovered.[5]" text_trans_de = translator_de(text_en, min_length=len(text_en), early_stopping=False) text_trans_de[0]['translation_text'] ``` Output: 'Zu den BemΓΌhungen, die Ausbreitung des Virus zu verhindern, zΓ€hlen ReisebeschrΓ€nkungen, QuarantΓ€ne, Sperrzeiten, Arbeitsplatz-Gefahrkontrollen, Verschiebungen und Annullierungen von Veranstaltungen und Anlagenschließungen, darunter die QuarantΓ€ne in Hubei, nationale oder regionale QuarantΓ€ne in anderen Teilen der Welt, Sperrmaßnahmen in China und SΓΌdkorea, verschiedene Grenzschließungen oder Einreise\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad' ## Expected behavior Ideally, it would allow me to translate text of any length. ## Environment info - `transformers` version: 2.7.0 - Platform: MacOS Catalania 10.15.3 (19D76) - Python version: 7.3 - Using GPU in script?: No, CPU - Using distributed or parallel set-up in script?: No.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3638/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3638/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3637
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3637/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3637/comments
https://api.github.com/repos/huggingface/transformers/issues/3637/events
https://github.com/huggingface/transformers/pull/3637
594,377,820
MDExOlB1bGxSZXF1ZXN0Mzk5MDEwOTMz
3,637
[TransfoXL] fix argument order of update_mems fn in TF version
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=h1) Report\n> Merging [#3637](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ab8ab4f50baf391612cbc78cfa3f09b7ad0c3ac&el=desc) will **increase** coverage by `0.94%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3637/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3637 +/- ##\n==========================================\n+ Coverage 77.34% 78.29% +0.94% \n==========================================\n Files 104 104 \n Lines 17628 17628 \n==========================================\n+ Hits 13634 13801 +167 \n+ Misses 3994 3827 -167 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `89.15% <100.00%> (ΓΈ)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=footer). Last update [4ab8ab4...8db7ebd](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,586
1,586
1,586
MEMBER
null
Wrong argument order of function. Thanks @dmytyar !
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3637/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3637", "html_url": "https://github.com/huggingface/transformers/pull/3637", "diff_url": "https://github.com/huggingface/transformers/pull/3637.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3637.patch", "merged_at": 1586082822000 }
https://api.github.com/repos/huggingface/transformers/issues/3636
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3636/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3636/comments
https://api.github.com/repos/huggingface/transformers/issues/3636/events
https://github.com/huggingface/transformers/pull/3636
594,373,202
MDExOlB1bGxSZXF1ZXN0Mzk5MDA2Njg5
3,636
[Docs, T5] Fix TF T5 examples docstring
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,586
1,586
1,586
MEMBER
null
Update TF T5 docstring - since forgotten to do so in: #3547
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3636/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3636/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3636", "html_url": "https://github.com/huggingface/transformers/pull/3636", "diff_url": "https://github.com/huggingface/transformers/pull/3636.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3636.patch", "merged_at": 1586082189000 }
https://api.github.com/repos/huggingface/transformers/issues/3635
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3635/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3635/comments
https://api.github.com/repos/huggingface/transformers/issues/3635/events
https://github.com/huggingface/transformers/issues/3635
594,345,396
MDU6SXNzdWU1OTQzNDUzOTY=
3,635
Reinitializing layers in BERT
{ "login": "paul-you", "id": 23263212, "node_id": "MDQ6VXNlcjIzMjYzMjEy", "avatar_url": "https://avatars.githubusercontent.com/u/23263212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/paul-you", "html_url": "https://github.com/paul-you", "followers_url": "https://api.github.com/users/paul-you/followers", "following_url": "https://api.github.com/users/paul-you/following{/other_user}", "gists_url": "https://api.github.com/users/paul-you/gists{/gist_id}", "starred_url": "https://api.github.com/users/paul-you/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/paul-you/subscriptions", "organizations_url": "https://api.github.com/users/paul-you/orgs", "repos_url": "https://api.github.com/users/paul-you/repos", "events_url": "https://api.github.com/users/paul-you/events{/privacy}", "received_events_url": "https://api.github.com/users/paul-you/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,591
1,591
NONE
null
Hello, I have a question regarding re-initialising the encoder layers in BERT. What happens if I call the __init__() method of a BERT layer, is the layer re-initialised using the pre-trained BERT weights or does it get completely new weights? `model.bert.encoder.layer[0].__init__(config)`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3635/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3634
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3634/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3634/comments
https://api.github.com/repos/huggingface/transformers/issues/3634/events
https://github.com/huggingface/transformers/issues/3634
594,344,895
MDU6SXNzdWU1OTQzNDQ4OTU=
3,634
Custom collate function that pads only to the longest sequence?
{ "login": "ZhaofengWu", "id": 11954789, "node_id": "MDQ6VXNlcjExOTU0Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhaofengWu", "html_url": "https://github.com/ZhaofengWu", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions", "organizations_url": "https://api.github.com/users/ZhaofengWu/orgs", "repos_url": "https://api.github.com/users/ZhaofengWu/repos", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhaofengWu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,591
1,591
CONTRIBUTOR
null
Currently all input is padded to `max_seq_length` but in most cases the longest sequence in a batch is shorter than that, sometimes by a significant amount. If there is a custom collate function that pads only to the longest sequence in a batch, that will probably save quite some memory and time. Is that feasible?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3634/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3633
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3633/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3633/comments
https://api.github.com/repos/huggingface/transformers/issues/3633/events
https://github.com/huggingface/transformers/issues/3633
594,279,098
MDU6SXNzdWU1OTQyNzkwOTg=
3,633
Cased model + `--do_lower_case` in documentation?
{ "login": "ZhaofengWu", "id": 11954789, "node_id": "MDQ6VXNlcjExOTU0Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhaofengWu", "html_url": "https://github.com/ZhaofengWu", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions", "organizations_url": "https://api.github.com/users/ZhaofengWu/orgs", "repos_url": "https://api.github.com/users/ZhaofengWu/repos", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhaofengWu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,586
1,586
1,586
CONTRIBUTOR
null
The [examples README](https://github.com/huggingface/transformers/tree/master/examples/README.md) has a lot of examples using both a cased model and the `--do_lower_case` option. Is that an error?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3633/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3632
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3632/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3632/comments
https://api.github.com/repos/huggingface/transformers/issues/3632/events
https://github.com/huggingface/transformers/pull/3632
594,089,714
MDExOlB1bGxSZXF1ZXN0Mzk4NzUxMzEz
3,632
[Bart] Replace config.output_past with use_cache kwarg
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=h1) Report\n> Merging [#3632](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ab8ab4f50baf391612cbc78cfa3f09b7ad0c3ac&el=desc) will **increase** coverage by `0.94%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3632/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3632 +/- ##\n==========================================\n+ Coverage 77.34% 78.29% +0.94% \n==========================================\n Files 104 104 \n Lines 17628 17629 +1 \n==========================================\n+ Hits 13634 13802 +168 \n+ Misses 3994 3827 -167 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `100.00% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.61% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=footer). Last update [4ab8ab4...904b387](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I think these if statements in tests are not needed anymore now and can be removed:\r\nhttps://github.com/huggingface/transformers/blob/4ab8ab4f50baf391612cbc78cfa3f09b7ad0c3ac/tests/test_modeling_common.py#L632\r\nand \r\nhttps://github.com/huggingface/transformers/blob/4ab8ab4f50baf391612cbc78cfa3f09b7ad0c3ac/tests/test_modeling_tf_common.py#L428\r\n\r\nLooks good to me otherwise" ]
1,586
1,586
1,586
CONTRIBUTOR
null
- Rename generation_mode -> `use_cache` ### Benefits - Avoid confusion (see linked issues) - allow unit tests to instantiate once, then test `forward` and `generate`. Avoiding extra 10 second init cost. - Never accidentally have slow generation ### Costs - If a developer is changing something and wants to turn caching off, they must edit `prepare_inputs_for_generation` and pass use_cache=False. This is documented. They are a developer by construction so this cost is low. - inconsistency with other cachers like `CTRLModel`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3632/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3632", "html_url": "https://github.com/huggingface/transformers/pull/3632", "diff_url": "https://github.com/huggingface/transformers/pull/3632.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3632.patch", "merged_at": 1586300906000 }
https://api.github.com/repos/huggingface/transformers/issues/3631
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3631/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3631/comments
https://api.github.com/repos/huggingface/transformers/issues/3631/events
https://github.com/huggingface/transformers/pull/3631
593,985,419
MDExOlB1bGxSZXF1ZXN0Mzk4NjcxMTY2
3,631
Fix RoBERTa/XLNet Pad Token in run_multiple_choice.py
{ "login": "ethanjperez", "id": 6402205, "node_id": "MDQ6VXNlcjY0MDIyMDU=", "avatar_url": "https://avatars.githubusercontent.com/u/6402205?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ethanjperez", "html_url": "https://github.com/ethanjperez", "followers_url": "https://api.github.com/users/ethanjperez/followers", "following_url": "https://api.github.com/users/ethanjperez/following{/other_user}", "gists_url": "https://api.github.com/users/ethanjperez/gists{/gist_id}", "starred_url": "https://api.github.com/users/ethanjperez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ethanjperez/subscriptions", "organizations_url": "https://api.github.com/users/ethanjperez/orgs", "repos_url": "https://api.github.com/users/ethanjperez/repos", "events_url": "https://api.github.com/users/ethanjperez/events{/privacy}", "received_events_url": "https://api.github.com/users/ethanjperez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=h1) Report\n> Merging [#3631](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/243e687be6cd701722cce050005a2181e78a08a8&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3631/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3631 +/- ##\n==========================================\n- Coverage 78.30% 78.28% -0.02% \n==========================================\n Files 104 104 \n Lines 17627 17627 \n==========================================\n- Hits 13802 13800 -2 \n- Misses 3825 3827 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.97% <0.00%> (-0.13%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=footer). Last update [243e687...284fa1b](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,586
1,586
1,586
CONTRIBUTOR
null
`convert_examples_to_features` sets `pad_token=0` by default, which is correct for BERT but incorrect for RoBERTa (`pad_token=1`) and XLNet (`pad_token=5`). I think the other arguments to `convert_examples_to_features` are correct, but it would be helpful if someone more familiar with this part of the codebase could check.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3631/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3631", "html_url": "https://github.com/huggingface/transformers/pull/3631", "diff_url": "https://github.com/huggingface/transformers/pull/3631.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3631.patch", "merged_at": 1586206343000 }
https://api.github.com/repos/huggingface/transformers/issues/3630
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3630/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3630/comments
https://api.github.com/repos/huggingface/transformers/issues/3630/events
https://github.com/huggingface/transformers/issues/3630
593,951,252
MDU6SXNzdWU1OTM5NTEyNTI=
3,630
How to get the top 10 possible words to calculate Top-K accuracy and MRR?
{ "login": "hmdgit", "id": 59701320, "node_id": "MDQ6VXNlcjU5NzAxMzIw", "avatar_url": "https://avatars.githubusercontent.com/u/59701320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hmdgit", "html_url": "https://github.com/hmdgit", "followers_url": "https://api.github.com/users/hmdgit/followers", "following_url": "https://api.github.com/users/hmdgit/following{/other_user}", "gists_url": "https://api.github.com/users/hmdgit/gists{/gist_id}", "starred_url": "https://api.github.com/users/hmdgit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hmdgit/subscriptions", "organizations_url": "https://api.github.com/users/hmdgit/orgs", "repos_url": "https://api.github.com/users/hmdgit/repos", "events_url": "https://api.github.com/users/hmdgit/events{/privacy}", "received_events_url": "https://api.github.com/users/hmdgit/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I have written a function to calculate Top-k accuracy and MRR of a model, which is trained by using GPT-2. However, the function gives me very low values of Top-k accuracy and MRR. \r\n\r\nKindly let me know anything wrong in this function?\r\n\r\n```\r\ndef calculateModelPerformance(model,tokenizer):\r\n testData=readFileData('dataset/test.txt') # file contain words separated by space.\r\n step=1\r\n block_size=128\r\n top1 = 0.0\r\n top3 = 0.0\r\n top5 = 0.0\r\n top10 = 0.0\r\n mrr=0.0\r\n\ttotalIterations=0.0\r\n for i in range(0, len(testData)-block_size, step):\r\n print(\"Iteration \" + str(i+1))\r\n sequence=testData[i: i + block_size]\r\n next_word=testData[i + block_size]\r\n input_ids = torch.tensor(tokenizer.encode(sequence)).unsqueeze(0)\r\n # get logits of last predicted token\r\n next_word_logits = model(input_ids)[0][0, -1].detach()\r\n probabilities, indices = next_word_logits.topk(10)\r\n words = [tokenizer.decode(tir.item()) for tir in indices]\r\n rank = 1.0\r\n\r\n for word in words:\r\n if word == next_word:\r\n mrr += 1.0/rank\r\n if rank<=1.0:\r\n top1+=1.0\r\n if rank<=3.0:\r\n top3+=1.0\r\n if rank<=5.0:\r\n top5+=1.0\r\n if rank<=10.0:\r\n top10+=1.0\r\n print(\"MRR \", str(mrr))\r\n print(\"Top 1 \",str(top1))\r\n print(\"Top 3 \", str(top3))\r\n print(\"Top 5 \",str(top5))\r\n print(\"Top 10 \",str(top10))\r\n break\r\n rank = rank + 1.0\r\n\t\ttotalIterations +=1.0\r\n\t\t\r\n print(\"Total MRR \",str(mrr/totalIterations))\r\n print(\"Total Top-1 Accuracy \", str(top1 / totalIterations))\r\n print(\"Total Top-3 Accuracy \",str(top3/totalIterations))\r\n print(\"Total Top-5 Accuracy \", str(top5 / totalIterations))\r\n print(\"Total Top-10 Accuracy \", str(top10 / totalIterations))\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,586
1,591
1,591
NONE
null
# ❓ Questions & Help Dear All, I am a newcomer to build language model with GPT-2. I have [fine-tuned](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) a language model by using GPT-2. Now, I would like to calculate Top-k accuracy and Mean Reciprocal Rank (MRR) of my model. For this purpose, I am using following strategy to get top 10 next predicted words: 1. Get a sub-sequence of text of length such as 40, contained in a test.txt, from the start of the file. 2. Next sub-sequence is created by moving a window of 40 words to next step 1. This process will go on untill, we get a list of all sub-sequences of test.txt file. 3. Pass the sub-sequences one by one to the generated model, which should give next 10 possible set of words. For this purpose, I am using following segment of code to get top 10 words by adapting a code mentioned at this [link](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py): ``` prompt_text = 'hello world' #Its an example. In reality it will be a complete subsequence of 40 words encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt") encoded_prompt = encoded_prompt.to('cpu') output_sequences = model.generate( input_ids=encoded_prompt, max_length=1+len(encoded_prompt[0]), top_p=0.9, do_sample=True, num_return_sequences=10 ) ``` Is it a right way to get top 10 next words on the basis of input string, which helps me in calculating Top-k accuracy and MRR of my model? Kindly let me know about your concerns.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3630/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3629
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3629/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3629/comments
https://api.github.com/repos/huggingface/transformers/issues/3629/events
https://github.com/huggingface/transformers/pull/3629
593,945,083
MDExOlB1bGxSZXF1ZXN0Mzk4NjM2MzYx
3,629
Create README.md for ktrapeznikov/scibert_scivocab_uncased_squad_v2
{ "login": "ktrapeznikov", "id": 4052002, "node_id": "MDQ6VXNlcjQwNTIwMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/4052002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ktrapeznikov", "html_url": "https://github.com/ktrapeznikov", "followers_url": "https://api.github.com/users/ktrapeznikov/followers", "following_url": "https://api.github.com/users/ktrapeznikov/following{/other_user}", "gists_url": "https://api.github.com/users/ktrapeznikov/gists{/gist_id}", "starred_url": "https://api.github.com/users/ktrapeznikov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ktrapeznikov/subscriptions", "organizations_url": "https://api.github.com/users/ktrapeznikov/orgs", "repos_url": "https://api.github.com/users/ktrapeznikov/repos", "events_url": "https://api.github.com/users/ktrapeznikov/events{/privacy}", "received_events_url": "https://api.github.com/users/ktrapeznikov/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=h1) Report\n> Merging [#3629](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/243e687be6cd701722cce050005a2181e78a08a8&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3629/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3629 +/- ##\n==========================================\n- Coverage 78.30% 78.28% -0.02% \n==========================================\n Files 104 104 \n Lines 17627 17627 \n==========================================\n- Hits 13802 13800 -2 \n- Misses 3825 3827 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=tree) | Coverage Ξ” | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.97% <0.00%> (-0.13%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ” = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=footer). Last update [243e687...e1252ef](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,586
1,586
1,586
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3629/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3629/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3629", "html_url": "https://github.com/huggingface/transformers/pull/3629", "diff_url": "https://github.com/huggingface/transformers/pull/3629.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3629.patch", "merged_at": 1586027912000 }
https://api.github.com/repos/huggingface/transformers/issues/3628
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3628/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3628/comments
https://api.github.com/repos/huggingface/transformers/issues/3628/events
https://github.com/huggingface/transformers/pull/3628
593,939,038
MDExOlB1bGxSZXF1ZXN0Mzk4NjMxODI0
3,628
Create README.md for ktrapeznikov/albert-xlarge-v2-squad-v2
{ "login": "ktrapeznikov", "id": 4052002, "node_id": "MDQ6VXNlcjQwNTIwMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/4052002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ktrapeznikov", "html_url": "https://github.com/ktrapeznikov", "followers_url": "https://api.github.com/users/ktrapeznikov/followers", "following_url": "https://api.github.com/users/ktrapeznikov/following{/other_user}", "gists_url": "https://api.github.com/users/ktrapeznikov/gists{/gist_id}", "starred_url": "https://api.github.com/users/ktrapeznikov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ktrapeznikov/subscriptions", "organizations_url": "https://api.github.com/users/ktrapeznikov/orgs", "repos_url": "https://api.github.com/users/ktrapeznikov/repos", "events_url": "https://api.github.com/users/ktrapeznikov/events{/privacy}", "received_events_url": "https://api.github.com/users/ktrapeznikov/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,586
1,586
1,586
CONTRIBUTOR
null
Adding README for ktrapeznikov/albert-xlarge-v2-squad-v2
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3628/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3628", "html_url": "https://github.com/huggingface/transformers/pull/3628", "diff_url": "https://github.com/huggingface/transformers/pull/3628.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3628.patch", "merged_at": 1586027935000 }
https://api.github.com/repos/huggingface/transformers/issues/3627
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3627/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3627/comments
https://api.github.com/repos/huggingface/transformers/issues/3627/events
https://github.com/huggingface/transformers/issues/3627
593,921,294
MDU6SXNzdWU1OTM5MjEyOTQ=
3,627
Failing to load saved TFBertModel
{ "login": "sourabhXIII", "id": 13887449, "node_id": "MDQ6VXNlcjEzODg3NDQ5", "avatar_url": "https://avatars.githubusercontent.com/u/13887449?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sourabhXIII", "html_url": "https://github.com/sourabhXIII", "followers_url": "https://api.github.com/users/sourabhXIII/followers", "following_url": "https://api.github.com/users/sourabhXIII/following{/other_user}", "gists_url": "https://api.github.com/users/sourabhXIII/gists{/gist_id}", "starred_url": "https://api.github.com/users/sourabhXIII/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sourabhXIII/subscriptions", "organizations_url": "https://api.github.com/users/sourabhXIII/orgs", "repos_url": "https://api.github.com/users/sourabhXIII/repos", "events_url": "https://api.github.com/users/sourabhXIII/events{/privacy}", "received_events_url": "https://api.github.com/users/sourabhXIII/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Facing a similar issue with tf 2.2.0-rc3.\r\n```\r\ndef get_model(lr=0.00001):\r\n inp_bert = tf.keras.layers.Input(shape=(512), dtype=\"int32\")\r\n bert = TFBertModel.from_pretrained('bert-base-multilingual-cased')(inp_bert)[0]\r\n doc_encodings = tf.squeeze(bert[:, 0:1, :], axis=1)\r\n out = tf.keras.layers.Dense(1, activation=\"sigmoid\")(doc_encodings)\r\n model = tf.keras.Model(inp_bert, out)\r\n adam = tf.keras.optimizers.Adam(lr=lr)\r\n model.compile(optimizer=adam, loss=\"binary_crossentropy\", metrics=[\"accuracy\"])\r\n return model\r\nmodel = get_model()\r\nmodel.save(\"model_name\",save_format='tf')\r\nmodel = tf.keras.models.load_model('model_name')\r\nmodel.summary()\r\n```\r\nOutput error is:\r\n\r\n```\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py in assert_same_structure(nest1, nest2, check_types, expand_composites)\r\n 383 \"Entire first structure:\\n%s\\n\"\r\n 384 \"Entire second structure:\\n%s\"\r\n--> 385 % (str(e), str1, str2))\r\n 386 \r\n 387 \r\n\r\nValueError: The two structures don't have the same nested structure.\r\n\r\nFirst structure: type=TensorSpec str=TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs')\r\n\r\nSecond structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='inputs/input_ids')}\r\n\r\nMore specifically: Substructure \"type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='inputs/input_ids')}\" is a sequence, while substructure \"type=TensorSpec str=TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs')\" is not\r\nEntire first structure:\r\n.\r\nEntire second structure:\r\n{'input_ids': .}\r\n```", "change \r\n`base_output = base_model([ids, mask, token_type_ids])` \r\nto \r\n`base_output = base_model.bert([ids, mask, token_type_ids])`\r\nshould fix\r\n", "> \r\n> \r\n> change\r\n> `base_output = base_model([ids, mask, token_type_ids])`\r\n> to\r\n> `base_output = base_model.bert([ids, mask, token_type_ids])`\r\n> should fix\r\n\r\nThanks @Souls362 .. solves it.", "This worked for me as well with `TFBertModel`, however, I run into the same issue with `TFXLNetModel`. `TFXLNetModel` doesn't seem to have an equivalent to the `.bert` property/attribute. Does anyone know how to solve this when using `TFXLNetModel`?", "For `TFXLNetModel` this would be the `.transformer` attribute, as you can see [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_xlnet.py#L1127)", "@LysandreJik thank you! That works perfectly", "@LysandreJik How about `TFOpenAIGPTLMHeadModel` ? I use `.transformer` attribute, but the output shape become `[None, None, 768]`, while the original output shape of `TFOpenAIGPTLMHeadModel` is `[None, None, 13088]`. How to solve it? Thanks a lot!", "Well the `transformer` attribute is the transformer in itself, which has a hidden size of 768. The LM head model has an additional head which is the embedding matrix of, which has a size of 13088.", "Yes, I think so too. So how can I save the whole model?", "@Souls362 you are the greatest! I looked way too long for this. \r\n@huggingface folks, please add an extra detailed example on serialization in TF2. 
\r\nThere is seriously some clear documentation missing there.", "> change\r\n> `base_output = base_model([ids, mask, token_type_ids])`\r\n> to\r\n> `base_output = base_model.bert([ids, mask, token_type_ids])`\r\n> should fix\r\n\r\none tip for TFBertSequenceClassification: base_model.bert([ids, mask, token_type_ids])[1]", "> > change\r\n> > `base_output = base_model([ids, mask, token_type_ids])`\r\n> > to\r\n> > `base_output = base_model.bert([ids, mask, token_type_ids])`\r\n> > should fix\r\n> \r\n> one tip for TFBertSequenceClassification: base_model.bert([ids, mask, token_type_ids])[1]\r\n\r\nWhat is the difference of 0 and 1 in the brackets?", "> > > change\r\n> > > `base_output = base_model([ids, mask, token_type_ids])`\r\n> > > to\r\n> > > `base_output = base_model.bert([ids, mask, token_type_ids])`\r\n> > > should fix\r\n> > \r\n> > \r\n> > one tip for TFBertSequenceClassification: base_model.bert([ids, mask, token_type_ids])[1]\r\n> \r\n> What is the difference of 0 and 1 in the brackets?\r\n\r\n[TFBertModel documentation](https://huggingface.co/transformers/model_doc/bert.html#transformers.TFBertModel)\r\n\r\nmodel returns sequence output and pooled output (for classification)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "> change\r\n> `base_output = base_model([ids, mask, token_type_ids])`\r\n> to\r\n> `base_output = base_model.bert([ids, mask, token_type_ids])`\r\n> should fix\r\n\r\nbest answer ever!", "```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n/tmp/ipykernel_92500/3480800436.py in <module>\r\n----> 1 model_eval = model.evaluate(\r\n 2 dataset_test,\r\n 3 use_multiprocessing=True,\r\n 4 return_dict=True)\r\n\r\n~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in evaluate(self, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, return_dict)\r\n 1387 with trace.Trace('test', step_num=step, _r=1):\r\n 1388 callbacks.on_test_batch_begin(step)\r\n-> 1389 tmp_logs = self.test_function(iterator)\r\n 1390 if data_handler.should_sync:\r\n 1391 context.async_wait()\r\n\r\n~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)\r\n 826 tracing_count = self.experimental_get_tracing_count()\r\n 827 with trace.Trace(self._name) as tm:\r\n--> 828 result = self._call(*args, **kwds)\r\n 829 compiler = \"xla\" if self._experimental_compile else \"nonXla\"\r\n 830 new_tracing_count = self.experimental_get_tracing_count()\r\n\r\n~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)\r\n 869 # This is the first call of __call__, so we have to initialize.\r\n 870 initializers = []\r\n--> 871 self._initialize(args, kwds, add_initializers_to=initializers)\r\n 872 finally:\r\n 873 # At this point we know that the initialization is complete (or less\r\n\r\n~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)\r\n 723 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)\r\n 724 self._concrete_stateful_fn = (\r\n--> 725 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access\r\n 726 *args, **kwds))\r\n 727 
\r\n\r\n~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)\r\n 2967 args, kwargs = None, None\r\n 2968 with self._lock:\r\n-> 2969 graph_function, _ = self._maybe_define_function(args, kwargs)\r\n 2970 return graph_function\r\n 2971 \r\n\r\n~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)\r\n 3359 \r\n 3360 self._function_cache.missed.add(call_context_key)\r\n-> 3361 graph_function = self._create_graph_function(args, kwargs)\r\n 3362 self._function_cache.primary[cache_key] = graph_function\r\n 3363 \r\n\r\n~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)\r\n 3194 arg_names = base_arg_names + missing_arg_names\r\n 3195 graph_function = ConcreteFunction(\r\n-> 3196 func_graph_module.func_graph_from_py_func(\r\n 3197 self._name,\r\n 3198 self._python_function,\r\n\r\n~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)\r\n 988 _, original_func = tf_decorator.unwrap(python_func)\r\n 989 \r\n--> 990 func_outputs = python_func(*func_args, **func_kwargs)\r\n 991 \r\n 992 # invariant: `func_outputs` contains only Tensors, CompositeTensors,\r\n\r\n~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)\r\n 632 xla_context.Exit()\r\n 633 else:\r\n--> 634 out = weak_wrapped_fn().__wrapped__(*args, **kwds)\r\n 635 return out\r\n 636 \r\n\r\n~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)\r\n 975 except Exception as e: # pylint:disable=broad-except\r\n 976 if hasattr(e, \"ag_error_metadata\"):\r\n--> 977 raise e.ag_error_metadata.to_exception(e)\r\n 978 else:\r\n 979 raise\r\n\r\nTypeError: in user code:\r\n\r\n /home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1233 test_function *\r\n return step_function(self, iterator)\r\n /home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1224 step_function **\r\n outputs = model.distribute_strategy.run(run_step, args=(data,))\r\n /home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1259 run\r\n return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)\r\n /home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica\r\n return self._call_for_each_replica(fn, args, kwargs)\r\n /home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica\r\n return fn(*args, **kwargs)\r\n /home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1217 run_step **\r\n outputs = model.test_step(data)\r\n /home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1188 test_step\r\n self.compiled_metrics.update_state(y, y_pred, sample_weight)\r\n 
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/compile_utils.py:387 update_state\r\n self.build(y_pred, y_true)\r\n /home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/compile_utils.py:317 build\r\n self._metrics = nest.map_structure_up_to(y_pred, self._get_metric_objects,\r\n /home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/util/nest.py:1159 map_structure_up_to\r\n return map_structure_with_tuple_paths_up_to(\r\n /home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/util/nest.py:1241 map_structure_with_tuple_paths_up_to\r\n assert_shallow_structure(\r\n /home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/util/nest.py:847 assert_shallow_structure\r\n raise TypeError(_STRUCTURES_HAVE_MISMATCHING_TYPES.format(\r\n\r\n TypeError: The two structures don't have the same sequence type. Input structure has type <class 'tuple'>, while shallow structure has type <class 'dict'>.\r\n```\r\n\r\n```python\r\ndef map_tk(X, Y):\r\n X = tokenizer(\r\n X,\r\n max_length=max_len,\r\n padding='max_length',\r\n truncation=True,\r\n return_token_type_ids=False,\r\n return_tensors=\"tf\")\r\n \r\n X = {\r\n \"input_ids\": tf.reshape(X[\"input_ids\"], [max_len]),\r\n \"attention_mask\": tf.reshape(X[\"attention_mask\"], [max_len])\r\n }\r\n Y = {\r\n \"y1\": to_categorical(Y[\"y1\"], num_classes=11), \r\n \"y2\": to_categorical(Y[\"y2\"], num_classes=4)\r\n }\r\n return X, Y\r\n\r\n\r\ndef gen_data(df: pd.DataFrame):\r\n def gen():\r\n for _, row in df.iterrows():\r\n d = {\r\n \"X\": row[\"content\"], \r\n \"Y\": {\r\n \"y1\": row[\"y1\"],\r\n \"y2\": row[\"y2\"]\r\n }\r\n }\r\n yield map_tk(d[\"X\"], d[\"Y\"])\r\n \r\n return gen\r\n\r\n\r\noutput_signature = (\r\n {\r\n \"input_ids\": tf.TensorSpec(shape=(150,), dtype=tf.int32),\r\n \"attention_mask\": tf.TensorSpec(shape=(150,), dtype=tf.int32)\r\n },\r\n {\r\n \"institution\": tf.TensorSpec(shape=(11,), dtype=tf.int32),\r\n \"laws_nature\": tf.TensorSpec(shape=(4,), dtype=tf.int32)\r\n })\r\n\r\n\r\ndef build_dataset(df: pd.DataFrame, shuffle_size=0):\r\n ds = tf.data.Dataset.from_generator(\r\n gen_data(df),\r\n output_signature=output_signature)\r\n\r\n if shuffle_size > 0:\r\n ds = ds.shuffle(buffer_size=shuffle_size)\r\n\r\n return ds.batch(batch_size=batch_size).prefetch(1)\r\n\r\n\r\ndataset_train = build_dataset(train, 25600)\r\ndataset_valid = build_dataset(valid)\r\ndataset_test = build_dataset(test)\r\n```\r\n\r\nThis is the inputs of my model, is there any workaround for `electra`?\r\n```python\r\ninput_ids = Input(shape=(max_len,), name=\"input_ids\", dtype=\"int32\")\r\nattention_mask = Input(shape=(max_len,), name=\"attention_mask\", dtype=\"int32\")\r\ninputs = {\"input_ids\": input_ids, \"attention_mask\": attention_mask}\r\n\r\nX = pretrained(inputs)[\"hidden_states\"][-3:-1]\r\n```", "I had the same problem, this link solved the problem for me -> [link](https://stackoverflow.com/questions/73557769/valueerror-unknown-layer-tfbertmodel-please-ensure-this-object-is-passed-to-t)\r\nalso I saved model with Pickle and got a problem loading that too, I couldn't solve that if anyone knows how.\r\n\r\nThanks" ]
1,586
1,695
1,610
NONE
null
TF version: 2.2.0-rc1 transformers version: 2.7.0 `import tensorflow as tf` `import transformers` `print(tf.__version__)` `print(transformers.__version__)` `MAX_LEN = 10` `model_path = 'saved_model/temp_model'` `ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32)` `mask = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32)` `token_type_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32)` `base_model = transformers.TFBertModel.from_pretrained("bert-base-cased"` `, output_hidden_states=False)` `base_output = base_model([ids, mask, token_type_ids])` `seq_out, _ = base_output[0], base_output[1]` `base_model.trainable = False` `model = tf.keras.models.Model(inputs=[ids, mask, token_type_ids], outputs=[seq_out])` `model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])` `print(model.summary())` `model.save(model_path)` `model = tf.keras.models.load_model(model_path)` Model load fails with the following error: Traceback (most recent call last): File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/nest.py", line 378, in assert_same_structure expand_composites) TypeError: The two structures don't have the same nested structure. First structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')} Second structure: type=list str=[TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/0'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/1'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/2')] More specifically: The two namedtuples don't have the same sequence type. First structure type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')} has type dict, while second structure type=list str=[TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/0'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/1'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/2')] has type list During handling of the above exception, another exception occurred: Traceback (most recent call last): File "temp.py", line 29, in <module> model = tf.keras.models.load_model(model_path) File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py", line 190, in load_model return saved_model_load.load(filepath, compile) File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 116, in load model = tf_load.load_internal(path, loader_cls=KerasObjectLoader) File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py", line 604, in load_internal export_dir) File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 188, in __init__ super(KerasObjectLoader, self).__init__(*args, **kwargs) File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py", line 123, in __init__ self._load_all() File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 215, in _load_all self._finalize_objects() File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 506, in _finalize_objects _finalize_saved_model_layers(layers_revived_from_saved_model) File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 677, in _finalize_saved_model_layers inputs = infer_inputs_from_restored_call_function(call_fn) File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 921, in infer_inputs_from_restored_call_function spec = nest.map_structure(common_spec, spec, spec2) File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/nest.py", line 611, in map_structure expand_composites=expand_composites) File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/nest.py", line 385, in assert_same_structure % (str(e), str1, str2)) TypeError: The two structures don't have the same nested structure. First structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')} Second structure: type=list str=[TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/0'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/1'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/2')] More specifically: The two namedtuples don't have the same sequence type. First structure type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')} has type dict, while second structure type=list str=[TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/0'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/1'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/2')] has type list Entire first structure: {'input_ids': .} Entire second structure: [., ., .]
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3627/reactions", "total_count": 20, "+1": 20, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3627/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3626
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3626/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3626/comments
https://api.github.com/repos/huggingface/transformers/issues/3626/events
https://github.com/huggingface/transformers/issues/3626
593,785,903
MDU6SXNzdWU1OTM3ODU5MDM=
3,626
ValueError: You have to specify either input_ids or inputs_embeds!
{ "login": "innat", "id": 17668390, "node_id": "MDQ6VXNlcjE3NjY4Mzkw", "avatar_url": "https://avatars.githubusercontent.com/u/17668390?v=4", "gravatar_id": "", "url": "https://api.github.com/users/innat", "html_url": "https://github.com/innat", "followers_url": "https://api.github.com/users/innat/followers", "following_url": "https://api.github.com/users/innat/following{/other_user}", "gists_url": "https://api.github.com/users/innat/gists{/gist_id}", "starred_url": "https://api.github.com/users/innat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/innat/subscriptions", "organizations_url": "https://api.github.com/users/innat/orgs", "repos_url": "https://api.github.com/users/innat/repos", "events_url": "https://api.github.com/users/innat/events{/privacy}", "received_events_url": "https://api.github.com/users/innat/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hi @innat, \r\n\r\nT5 is an encoder-decoder model so you will have to provide both `input_ids` and `decoder_input_ids` to the model. Maybe taking a look at the [T5 docs](https://huggingface.co/transformers/model_doc/t5.html#transformers.T5Model.forward) (especially the \"Examples\") can help you :-) \r\n", "Just noticed that the Examples docstring for TF T5 was wrong. Is fixed with #3636 .", "@patrickvonplaten \r\nhello, sorry to bother you. Would you please justify the following piece of code:\r\n\r\n\r\n### Imports\r\n```python\r\nfrom transformers import TFAutoModel, AutoTokenizer\r\n\r\n# First load the real tokenizer\r\ntokenizer = AutoTokenizer.from_pretrained('t5-small')\r\ntransformer_layer = TFAutoModel.from_pretrained('t5-small')\r\n```\r\n\r\n### Define Encoder\r\n```python\r\ndef encode(texts, tokenizer, maxlen=512):\r\n enc_di = tokenizer.batch_encode_plus(\r\n texts, \r\n return_attention_masks=False, \r\n return_token_type_ids=False,\r\n pad_to_max_length=True,\r\n max_length=maxlen\r\n )\r\n return np.array(enc_di['input_ids'])\r\n\r\n# tokenized\r\nx_train = encode('text', tokenizer, maxlen=200)\r\ny_train\r\n```\r\n\r\n### Define Model and Call\r\n\r\n```python\r\ndef build_mod(transformer, max_len=512):\r\n input_word_ids = Input(shape=(max_len,), dtype=tf.int32, name=\"input_word_ids\")\r\n sequence_output = transformer(input_word_ids)[0]\r\n cls_token = sequence_output[:, 0, :]\r\n out = Dense(1, activation='sigmoid')(cls_token)\r\n \r\n model = Model(inputs=input_word_ids, outputs=out)\r\n model.compile(Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])\r\n\r\n return model\r\n\r\n# calling\r\nmodel = build_model(transformer_layer, max_len=200)\r\n```\r\n\r\nNow, according to the docstring, should I do,\r\n\r\n`outputs = model(input_ids=x_train, decoder_input_ids=x_train)[0]`\r\n\r\n?", "I'm not 100% sure what you want to do here exactly. T5 is always trained in a text-to-text format. We have a section here on how to train T5: https://huggingface.co/transformers/model_doc/t5.html#training\r\n\r\nOtherwise I'd recommend taking a look at the official paper.", "@patrickvonplaten Thanks for this. I encountered the same issue and this resolved it!\r\n\r\nI'm wondering if it makes sense to make the error message capture the requirement of having both `input_ids` and `decoder_input_ids` since this is an encoder-decoder model? This may make the fix clearer for users of encoder decoder models in the future.\r\n\r\nI.e., for encoded-decoder models, switch the error message from:\r\n\r\n```\r\nValueError: You have to specify either input_ids or inputs_embeds\r\n```\r\n\r\nto:\r\n\r\n```\r\nValueError: You have to specify either (input_ids and decoder_input_ids) or inputs_embeds\r\n```\r\n\r\nI can sent this as a PR as well if you think it makes sense!", "Hi @enzoampil,\r\n\r\nA PR for a cleaner Error message would be nice if you feel like it :-). It would be good if the error message could change between `ValueError: You have to specify either input_ids or inputs_embeds` if `self.is_decoder == False` and `ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds` if `self.is_decoder == True`. So adding a simple if statement to the error message is definitely a good idea!", "Got it will do. Thanks for the pointers! 
πŸ˜„ ", "Hi, I also got the same error when training seq2seq on tf.keras and I could not follow the example you provide on https://huggingface.co/transformers/model_doc/t5.html#training (this example is for pytorch I think)\r\n\r\nI create `x_encoder` as` input_ids` and `x_decoder_in` for `decoder_input_ids`\r\n\r\nmodel = TFT5Model.from_pretrained('t5-base')\r\nmodel.compile('adam',loss='sparse_binary_crossentropy')\r\n\r\nSo when I want to train the model I simply do \r\n`model.fit({'input_ids': x_encoder, 'decoder_input_ids': x_decoder_in})`\r\n\r\nwhere I clearly provide `input_ids` , but still got this error message : \r\n`ValueError: You have to specify either input_ids or inputs_embeds`\r\n\r\nNote that changing input from dict to list got the same error. Changing model from TFT5Model to TFT5ForConditionalGeneration got the same error. Changing loss to BCE got the same error.\r\n\r\nMoreover, changing input to only one array \r\n`model.fit({'input_ids': x_encoder})`\r\nis also error : \r\n\r\n`ValueError: No data provided for \"decoder_input_ids\". Need data for each key in: ['decoder_input_ids', 'input_ids']`", "In `class TFT5Model(TFT5PreTrainedModel):`\r\n\r\nI found this line (899-900):\r\n ``` \r\n\r\n # retrieve arguments\r\n input_ids = kwargs.get(\"inputs\", None)\r\n\r\n ```\r\nShouldn't it be `kwargs.get(\"input_ids\", None)` ??", "@ratthachat - thanks for you message! \r\nWe definitely need to provide more TF examples for the T5 Model. I want to tackle this problem in ~2 weeks. \r\n \r\nIn TF we use the naming convention `inputs`, so the you should change to `model.fit({\"inputs\": x_encoder})` . I very much agree that the error message is quite misleading and correct it in this PR: #4401. ", "Thanks for your consideration, Patrick!", "@patrickvonplaten Sorry to tag you in this old thread, but is there any official T5 TF example (as you mentioned in the last thread)?", "@ratthachat - no worries, we should definitely add more TF T5 examples and we still don't have a good TF T5 notebook. \r\nI am moving the discussion to the forum and if no one answers I will spent some time coping a T5 PT notebook to TF.", "Hi @patrickvonplaten i wanted to fine tune using T5 using TF 2.0 but its soo confusing at each end as compared to pytorch which is really well documented all current examples (community + offcial) are for pytorch. is the work for TFT5 notebook underway?", "Okey, seems like no-one has a complete TF T5 notebook. I will start working on it this week: https://discuss.huggingface.co/t/how-to-train-t5-with-tensorflow/641/6\r\n\r\nShould be done by next week sometime :-) ", "Hi @patrickvonplaten \r\nPlease help me with this error.\r\n\r\nI'm doing inference with a T5-base model which I finetuned on GLUE tasks.\r\n\r\nIt's giving error like \r\n`ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds`\r\n\r\nWhile doing inference, we just need to provide input_ids for the encoder right?\r\nWhy do we need `decoder_input_ids`?\r\n\r\nAnd as it's inference, my `labels` will also be `None`.\r\nSo, [this](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_t5.py#L1171) part will not execute.\r\n`decoder_input_ids = self._shift_right(labels)`\r\n\r\nWaiting for your reply.\r\nThank you.", "@prashant-kikani it is indeed a strange behavior. 
have you tried passing `input_ids` to `decoder_input_ids` like:\r\n\r\n```\r\ninput_ids = tokenizer(..., return_tensor='tf') # replace pt for pytorch\r\noutputs= model(input_ids=input_ids, decoder_input_ids=input_ids)\r\n\r\nassert len(outputs)==3, 'must return 3 tensors when inferencing'\r\n```", "Hi @HarrisDePerceptron \r\nWe can do it & it's giving some output also. But it's not the right thing to do.\r\n\r\nYou see, T5 which Transformer itself, is a text to text model.\r\nSo, it can do inference in linear time by matrix multiplication when `label` is available.\r\n\r\nBut, when label is not available, we need to go sequentially by doing forward pass in decoder for each word till `</s>` doesn't come.\r\nWe need to concatenate last output of decoder with new input if decoder each time.\r\n\r\nWhat do you think?\r\n", "@prashant-kikani @HarrisDePerceptron \r\n\r\nFor `decoder_input_ids` , we just need to put a single BOS token so that the decoder will know that this is the beginning of the output sentence. (Even in GLUE task, T5 still looks at every output label as a complete sentence )\r\n\r\nWe can see a concrete example by looking at the function \r\n`prepare_inputs_for_generation` which is called by `model.generate` \r\n(`generate` function is here : https://github.com/huggingface/transformers/blob/master/src/transformers/generation_tf_utils.py )\r\n\r\nSee line 298 in the above link : \r\n```\r\nif self.config.is_encoder_decoder:\r\n if decoder_start_token_id is None:\r\n decoder_start_token_id = bos_token_id\r\n\r\n```\r\nand line 331:\r\n```\r\n# create empty decoder_input_ids\r\n input_ids = (\r\n tf.ones(\r\n (effective_batch_size * num_beams, 1),\r\n dtype=tf.int32,\r\n )\r\n * decoder_start_token_id\r\n )\r\n```\r\n\r\nand see T5's `prepare_inputs_for_generation` which change the above `input_ids` into `decoder_input_ids` implementation at : \r\nhttps://github.com/huggingface/transformers/blob/08f534d2da47875a4b7eb1c125cfa7f0f3b79642/src/transformers/modeling_tf_t5.py#L1367", "Hi @patrickvonplaten Patrick,\r\n\r\nThanks for your great work and great comment. I mimic the process of inferencing T5 as below and I got a bug, is it possible that you could help me to advise what has happended?\r\n\r\n```py\r\nfrom transformers import AutoModel, AutoTokenizer \r\nmodel_name = \"castorini/t5-base-canard\" \r\n\r\nmodel = AutoModel.from_pretrained(model_name)\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\ncontext = '''\r\n Frank Zappa ||| Disbandment ||| What group disbanded ||| Zappa and the Mothers of Invention ||| When did they disband?\r\n'''\r\n\r\nencoded_input = tokenizer(\r\n context,\r\n padding='max_length',\r\n max_length=512,\r\n truncation=True,\r\n return_tensors=\"pt\",\r\n)\r\ndecoder_input = tokenizer(\r\n context,\r\n padding='max_length',\r\n max_length=512,\r\n truncation=True,\r\n return_tensors=\"pt\",\r\n)\r\n\r\nencoder_output = model.generate(input_ids=encoded_input[\"input_ids\"], decoder_input_ids=decoder_input[\"input_ids\"])\r\noutput = tokenizer.decode(\r\n encoder_output[0],\r\n skip_special_tokens=True\r\n)\r\noutput\r\n```\r\n\r\nI got error, though I alreadly provided ```decoder_input_ids```:\r\n\r\n```\r\nSome weights of the model checkpoint at castorini/t5-base-canard were not used when initializing T5Model: ['lm_head.weight']\r\n- This IS expected if you are initializing T5Model from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing T5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nInput length of decoder_input_ids is 512, but ``max_length`` is set to 20. This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``.\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[<ipython-input-11-b9fe12b71812>](https://localhost:8080/#) in <module>()\r\n 24 )\r\n 25 \r\n---> 26 encoder_output = model.generate(input_ids=encoded_input[\"input_ids\"], decoder_input_ids=decoder_input[\"input_ids\"])\r\n 27 output = tokenizer.decode(\r\n 28 encoder_output[0],\r\n\r\n6 frames\r\n[/usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 925 else:\r\n 926 err_msg_prefix = \"decoder_\" if self.is_decoder else \"\"\r\n--> 927 raise ValueError(f\"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds\")\r\n 928 \r\n 929 if inputs_embeds is None:\r\n\r\nValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds\r\n```\r\n\r\nThanks!", "Hey @dxlong2000,\r\n\r\nI'll open a new issue for this to make it more visible as I think this error happens quite often. See: https://github.com/huggingface/transformers/issues/16234", "Good issue! really helps me." ]
1,585
1,689
1,586
NONE
null
## Details I'm quite new to NLP tasks. However, I was trying to train the T5-large model and set things up as follows. But unfortunately, I've got an error. ```python def build_model(transformer, max_len=512): input_word_ids = Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids") sequence_output = transformer(input_word_ids)[0] cls_token = sequence_output[:, 0, :] out = Dense(1, activation='sigmoid')(cls_token) model = Model(inputs=input_word_ids, outputs=out) return model model = build_model(transformer_layer, max_len=MAX_LEN) ``` It throws ``` ValueError: in converted code: ValueError Traceback (most recent call last) <ipython-input-19-8ad6e68cd3f5> in <module> ----> 5 model = build_model(transformer_layer, max_len=MAX_LEN) 6 7 model.summary() <ipython-input-17-e001ed832ed6> in build_model(transformer, max_len) 31 """ 32 input_word_ids = Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids") ---> 33 sequence_output = transformer(input_word_ids)[0] 34 cls_token = sequence_output[:, 0, :] 35 out = Dense(1, activation='sigmoid')(cls_token) ValueError: You have to specify either input_ids or inputs_embeds ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3626/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3626/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3625
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3625/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3625/comments
https://api.github.com/repos/huggingface/transformers/issues/3625/events
https://github.com/huggingface/transformers/issues/3625
593,771,876
MDU6SXNzdWU1OTM3NzE4NzY=
3,625
How can I run a GPT-2 model on TF Serving?
{ "login": "yiyele", "id": 20697201, "node_id": "MDQ6VXNlcjIwNjk3MjAx", "avatar_url": "https://avatars.githubusercontent.com/u/20697201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yiyele", "html_url": "https://github.com/yiyele", "followers_url": "https://api.github.com/users/yiyele/followers", "following_url": "https://api.github.com/users/yiyele/following{/other_user}", "gists_url": "https://api.github.com/users/yiyele/gists{/gist_id}", "starred_url": "https://api.github.com/users/yiyele/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yiyele/subscriptions", "organizations_url": "https://api.github.com/users/yiyele/orgs", "repos_url": "https://api.github.com/users/yiyele/repos", "events_url": "https://api.github.com/users/yiyele/events{/privacy}", "received_events_url": "https://api.github.com/users/yiyele/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,585
1,619
1,619
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3625/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3624
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3624/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3624/comments
https://api.github.com/repos/huggingface/transformers/issues/3624/events
https://github.com/huggingface/transformers/issues/3624
593,671,282
MDU6SXNzdWU1OTM2NzEyODI=
3,624
Add code to pretrain T5 model from scratch
{ "login": "LiweiPeng", "id": 8562078, "node_id": "MDQ6VXNlcjg1NjIwNzg=", "avatar_url": "https://avatars.githubusercontent.com/u/8562078?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LiweiPeng", "html_url": "https://github.com/LiweiPeng", "followers_url": "https://api.github.com/users/LiweiPeng/followers", "following_url": "https://api.github.com/users/LiweiPeng/following{/other_user}", "gists_url": "https://api.github.com/users/LiweiPeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/LiweiPeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LiweiPeng/subscriptions", "organizations_url": "https://api.github.com/users/LiweiPeng/orgs", "repos_url": "https://api.github.com/users/LiweiPeng/repos", "events_url": "https://api.github.com/users/LiweiPeng/events{/privacy}", "received_events_url": "https://api.github.com/users/LiweiPeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "@patrickvonplaten Can we pre-train T5 from scratch on any task? I want to use it for Question Answering.", "Many notebooks for T5 are now added to the community notebooks :-) ", "@patrickvonplaten can you share the notebook which show T5 pre-training if it is available ?", "@patrickvonplaten as of today, I didn't find any notebook that is related to T5 *pretraining* in the [community notebooks collection ](https://huggingface.co/transformers/master/community.html#community-notebooks). Could you elaborate more on where there is a codebase to do the pretraining? Thanks!", "> @patrickvonplaten as of today, I didn't find any notebook that is related to T5 pretraining in the community notebooks collection . Could you elaborate more on where there is a codebase to do the pretraining? Thanks!\r\n\r\nYes I agree there is no guide for pretraining\r\n" ]
1,585
1,624
1,591
NONE
null
# πŸš€ Feature request The T5 model can significantly improve NLP task accuracies. However, the existing pretrained models are all in English. I'd like to pretrain the T5 model from scratch on datasets in other languages. Can you add code for pretraining the T5 model? Thanks.
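At the time of the comments above there was no official pretraining guide. As a rough sketch of the starting point, a randomly initialized T5 can be built directly from a config; the span-corruption objective and the data pipeline still have to be supplied separately, and the hyperparameters below are illustrative, not the official `t5-base` values:

```python
# Sketch: an untrained T5 with random weights, built from a config.
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config(vocab_size=32128, d_model=512, d_ff=2048, num_layers=6, num_heads=8)
model = T5ForConditionalGeneration(config)  # no pretrained checkpoint loaded
print(model.num_parameters())
```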
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3624/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3624/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3623
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3623/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3623/comments
https://api.github.com/repos/huggingface/transformers/issues/3623/events
https://github.com/huggingface/transformers/pull/3623
593,667,082
MDExOlB1bGxSZXF1ZXN0Mzk4NDA4MjQw
3,623
Create model card
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "I forgot to add the language in the header:\r\n```\r\n---\r\nlanguage: english\r\nthumbnail: \r\n---\r\n```", "Looks quite cool! Also cc'ing @lvwerra\r\n\r\nThanks for sharing πŸ™", "Very nice - never tested it for negative feedback :)" ]
1,585
1,586
1,586
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3623/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3623", "html_url": "https://github.com/huggingface/transformers/pull/3623", "diff_url": "https://github.com/huggingface/transformers/pull/3623.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3623.patch", "merged_at": 1586002835000 }
https://api.github.com/repos/huggingface/transformers/issues/3622
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3622/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3622/comments
https://api.github.com/repos/huggingface/transformers/issues/3622/events
https://github.com/huggingface/transformers/issues/3622
593,603,215
MDU6SXNzdWU1OTM2MDMyMTU=
3,622
default of weight_decay for run_language_modeling.py
{ "login": "mahdirezaey", "id": 34715488, "node_id": "MDQ6VXNlcjM0NzE1NDg4", "avatar_url": "https://avatars.githubusercontent.com/u/34715488?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mahdirezaey", "html_url": "https://github.com/mahdirezaey", "followers_url": "https://api.github.com/users/mahdirezaey/followers", "following_url": "https://api.github.com/users/mahdirezaey/following{/other_user}", "gists_url": "https://api.github.com/users/mahdirezaey/gists{/gist_id}", "starred_url": "https://api.github.com/users/mahdirezaey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mahdirezaey/subscriptions", "organizations_url": "https://api.github.com/users/mahdirezaey/orgs", "repos_url": "https://api.github.com/users/mahdirezaey/repos", "events_url": "https://api.github.com/users/mahdirezaey/events{/privacy}", "received_events_url": "https://api.github.com/users/mahdirezaey/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi \r\n\r\nthe default value of weight_decay is \"0\" for run_language_modeling.py , why is that ?\r\n\r\nshouldn't it be 0.01 according to original paper of BERT ?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,585
1,591
1,591
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3622/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3621
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3621/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3621/comments
https://api.github.com/repos/huggingface/transformers/issues/3621/events
https://github.com/huggingface/transformers/pull/3621
593,597,411
MDExOlB1bGxSZXF1ZXN0Mzk4MzUwNDAw
3,621
fix prepare_for_tokenization in tokenization_roberta.py
{ "login": "boy2000-007man", "id": 4197489, "node_id": "MDQ6VXNlcjQxOTc0ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/4197489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/boy2000-007man", "html_url": "https://github.com/boy2000-007man", "followers_url": "https://api.github.com/users/boy2000-007man/followers", "following_url": "https://api.github.com/users/boy2000-007man/following{/other_user}", "gists_url": "https://api.github.com/users/boy2000-007man/gists{/gist_id}", "starred_url": "https://api.github.com/users/boy2000-007man/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/boy2000-007man/subscriptions", "organizations_url": "https://api.github.com/users/boy2000-007man/orgs", "repos_url": "https://api.github.com/users/boy2000-007man/repos", "events_url": "https://api.github.com/users/boy2000-007man/events{/privacy}", "received_events_url": "https://api.github.com/users/boy2000-007man/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,585
1,594
1,594
CONTRIBUTOR
null
Fixes the corner case that breaks run_glue.py with the QQP task, mentioned in #3608
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3621/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3621", "html_url": "https://github.com/huggingface/transformers/pull/3621", "diff_url": "https://github.com/huggingface/transformers/pull/3621.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3621.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/3620
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3620/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3620/comments
https://api.github.com/repos/huggingface/transformers/issues/3620/events
https://github.com/huggingface/transformers/pull/3620
593,558,391
MDExOlB1bGxSZXF1ZXN0Mzk4MzE4NDQ5
3,620
Update notebooks
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,585
1,586
1,586
MEMBER
null
Update the notebooks in the documentation
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3620/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3620", "html_url": "https://github.com/huggingface/transformers/pull/3620", "diff_url": "https://github.com/huggingface/transformers/pull/3620.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3620.patch", "merged_at": 1586197960000 }
https://api.github.com/repos/huggingface/transformers/issues/3619
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3619/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3619/comments
https://api.github.com/repos/huggingface/transformers/issues/3619/events
https://github.com/huggingface/transformers/issues/3619
593,437,072
MDU6SXNzdWU1OTM0MzcwNzI=
3,619
Feature Request: Fill Mask more than 1 token
{ "login": "p-christ", "id": 26346243, "node_id": "MDQ6VXNlcjI2MzQ2MjQz", "avatar_url": "https://avatars.githubusercontent.com/u/26346243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/p-christ", "html_url": "https://github.com/p-christ", "followers_url": "https://api.github.com/users/p-christ/followers", "following_url": "https://api.github.com/users/p-christ/following{/other_user}", "gists_url": "https://api.github.com/users/p-christ/gists{/gist_id}", "starred_url": "https://api.github.com/users/p-christ/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/p-christ/subscriptions", "organizations_url": "https://api.github.com/users/p-christ/orgs", "repos_url": "https://api.github.com/users/p-christ/repos", "events_url": "https://api.github.com/users/p-christ/events{/privacy}", "received_events_url": "https://api.github.com/users/p-christ/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Duplicate of #3609 \r\n\r\nNot currently supported but we welcome a PR" ]
1,585
1,585
1,585
NONE
null
At the moment you can use Hugging Face's mask-filling pipeline to predict 1 masked token in a sentence with the code below: ``` !pip install -q transformers from __future__ import print_function import ipywidgets as widgets from transformers import pipeline nlp_fill = pipeline('fill-mask') nlp_fill("I am going to guess <mask> in this sentence") ``` The request is that you also add the ability to predict N masked tokens rather than only 1 masked token. For example, if the sentence is `"I am going to make <mask> <mask> for breakfast"`, then the model might predict "fried eggs" for the 2 masked tokens.
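As the reply above notes, this was not a supported pipeline feature at the time. One rough sketch of the idea, greedily filling masks left to right with RoBERTa and re-encoding after each step (approximate, since each prediction conditions on the remaining masks):

```python
# Sketch: iterative multi-token mask filling; greedy, not an official pipeline.
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")
model.eval()

def fill_masks(text):
    while tokenizer.mask_token in text:
        input_ids = tokenizer.encode(text, return_tensors="pt")
        mask_pos = (input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
        with torch.no_grad():
            logits = model(input_ids)[0]            # prediction scores
        token_id = logits[0, mask_pos].argmax().item()
        word = tokenizer.decode([token_id]).strip()
        text = text.replace(tokenizer.mask_token, word, 1)  # leftmost mask only
    return text

print(fill_masks("I am going to make <mask> <mask> for breakfast"))
```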
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3619/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3619/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3618
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3618/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3618/comments
https://api.github.com/repos/huggingface/transformers/issues/3618/events
https://github.com/huggingface/transformers/pull/3618
593,399,562
MDExOlB1bGxSZXF1ZXN0Mzk4MTg3OTM5
3,618
Update German Bert model card
{ "login": "Timoeller", "id": 3264870, "node_id": "MDQ6VXNlcjMyNjQ4NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/3264870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Timoeller", "html_url": "https://github.com/Timoeller", "followers_url": "https://api.github.com/users/Timoeller/followers", "following_url": "https://api.github.com/users/Timoeller/following{/other_user}", "gists_url": "https://api.github.com/users/Timoeller/gists{/gist_id}", "starred_url": "https://api.github.com/users/Timoeller/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Timoeller/subscriptions", "organizations_url": "https://api.github.com/users/Timoeller/orgs", "repos_url": "https://api.github.com/users/Timoeller/repos", "events_url": "https://api.github.com/users/Timoeller/events{/privacy}", "received_events_url": "https://api.github.com/users/Timoeller/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "@Timoeller @tholor LGTM and thanks for linking to the discussion on the FARM repo. πŸ‘", "cherry-picked only the relevant change in 4ab8ab4f50baf391612cbc78cfa3f09b7ad0c3ac" ]
1,585
1,586
1,586
CONTRIBUTOR
null
We changed the vocab to work with run_split_on_punc tokenization. There are now far fewer [UNK] punctuation tokens. For more details, see deepset-ai/FARM/issues/60
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3618/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3618", "html_url": "https://github.com/huggingface/transformers/pull/3618", "diff_url": "https://github.com/huggingface/transformers/pull/3618.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3618.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/3617
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3617/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3617/comments
https://api.github.com/repos/huggingface/transformers/issues/3617/events
https://github.com/huggingface/transformers/issues/3617
593,373,968
MDU6SXNzdWU1OTMzNzM5Njg=
3,617
Choosing between adding frequent out-of-vocabulary words and doing further pretraining.
{ "login": "PieterDujardin", "id": 48496355, "node_id": "MDQ6VXNlcjQ4NDk2MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/48496355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PieterDujardin", "html_url": "https://github.com/PieterDujardin", "followers_url": "https://api.github.com/users/PieterDujardin/followers", "following_url": "https://api.github.com/users/PieterDujardin/following{/other_user}", "gists_url": "https://api.github.com/users/PieterDujardin/gists{/gist_id}", "starred_url": "https://api.github.com/users/PieterDujardin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PieterDujardin/subscriptions", "organizations_url": "https://api.github.com/users/PieterDujardin/orgs", "repos_url": "https://api.github.com/users/PieterDujardin/repos", "events_url": "https://api.github.com/users/PieterDujardin/events{/privacy}", "received_events_url": "https://api.github.com/users/PieterDujardin/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,585
1,591
1,591
NONE
null
I have a Dutch medical dataset (for Named Entity Recognition) which contains a lot of domain-specific words. The Dutch BERT tokenizer therefore outputs a lot of [UNK] tokens when it tokenizes. Given that I have a corpus of 60k labelled tokens, and right now also a relatively small unannotated corpus of 185k tokens, would it be best to: - just add the most frequent out-of-vocab words to the vocab of the tokenizer - start from a BERT checkpoint and do further pretraining on the unlabeled dataset (which is now of size 185k, which is pretty small I assume..). There might be a possibility for me to obtain a much larger unannotated dataset of potentially millions of (unlabelled) tokens, but I was wondering if even millions of tokens are enough to do some meaningful further pretraining? Thanks!
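For the first option, a minimal sketch of extending the vocabulary before fine-tuning; the checkpoint name and the domain terms below are hypothetical placeholders:

```python
# Sketch: add frequent out-of-vocabulary words and resize the embeddings.
from transformers import BertForTokenClassification, BertTokenizer

checkpoint = "bert-base-dutch-cased"  # hypothetical name; use the actual Dutch checkpoint
tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = BertForTokenClassification.from_pretrained(checkpoint, num_labels=9)

new_tokens = ["anamnese", "hypertensie"]  # illustrative domain-specific words
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))  # new rows are randomly initialized
print(f"Added {num_added} tokens")
```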
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3617/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3616
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3616/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3616/comments
https://api.github.com/repos/huggingface/transformers/issues/3616/events
https://github.com/huggingface/transformers/issues/3616
593,314,716
MDU6SXNzdWU1OTMzMTQ3MTY=
3,616
Out of memory error while training GPT2-large on 8x32GB Nvidia Volta
{ "login": "timsoraro", "id": 61194445, "node_id": "MDQ6VXNlcjYxMTk0NDQ1", "avatar_url": "https://avatars.githubusercontent.com/u/61194445?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timsoraro", "html_url": "https://github.com/timsoraro", "followers_url": "https://api.github.com/users/timsoraro/followers", "following_url": "https://api.github.com/users/timsoraro/following{/other_user}", "gists_url": "https://api.github.com/users/timsoraro/gists{/gist_id}", "starred_url": "https://api.github.com/users/timsoraro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timsoraro/subscriptions", "organizations_url": "https://api.github.com/users/timsoraro/orgs", "repos_url": "https://api.github.com/users/timsoraro/repos", "events_url": "https://api.github.com/users/timsoraro/events{/privacy}", "received_events_url": "https://api.github.com/users/timsoraro/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 2107554019, "node_id": "MDU6TGFiZWwyMTA3NTU0MDE5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models", "name": "Distributed Training / Models", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Someone's help, please?\r\n\r\nA block_size of 900 works, but I need to use 1024. Is there gradient checkpointing maybe?", "I managed to train on block_size of 950 using the latest build of pytorch supported by NVIDIA: https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_20-03.html#rel_20-03", "We are about to add gradient checkpointing, see here: https://github.com/huggingface/transformers/pull/4659, but I'm very unsure if it works well for distributed training...we might have to assign different modules to different devices as suggested here: https://github.com/huggingface/transformers/pull/3578", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,585
1,596
1,596
NONE
null
# πŸ› Bug I'm getting an `out-of-memory error` while trianing `gpt2-large` using `batch_size=1`. I'm using the [examples/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) script. I'm using a custom dataset with varied length examples, maximum `block_size` is 1024. This is the command I'm using: ``` python -m torch.distributed.launch --nproc_per_node 8 run_language_modeling.py --output_dir=./output_attention_mask_padding/ --model_type=gpt2 --model_name_or_path=gpt2-large --do_train --train_data_file=./data/training.txt --line_by_line --per_gpu_train_batch_size 1 --num_train_epochs 3 --fp16 ``` I tried changing `args.gradient_accumulation_steps` but to no success. Here's the traceback: ```python Traceback (most recent call last): | 9/213 [00:45<09:51, 2.90s/it] File "run_language_modeling.py", line 988, in <module> main() File "run_language_modeling.py", line 938, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_language_modeling.py", line 506, in train outputs = model(inputs, masked_lm_labels=labels, attention_mask=attention_mask) if args.mlm else model(inputs, labels=labels, attention_mask=attention_mask) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 442, in forward output = self.module(*inputs[0], **kwargs[0]) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/home/deepspeed/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 612, in forward loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1)) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 916, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "/usr/local/lib/python3.6/dist-packages/apex/amp/wrap.py", line 27, in wrapper kwargs) File "/usr/local/lib/python3.6/dist-packages/apex/amp/utils.py", line 78, in casted_args new_args.append(cast_fn(x)) File "/usr/local/lib/python3.6/dist-packages/apex/amp/utils.py", line 71, in maybe_float return x.float() RuntimeError: CUDA out of memory. Tried to allocate 190.00 MiB (GPU 2; 31.72 GiB total capacity; 28.71 GiB already allocated; 135.88 MiB free; 1.66 GiB cached) Traceback (most recent call last): File "run_language_modeling.py", line 988, in <module> main() File "run_language_modeling.py", line 938, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_language_modeling.py", line 523, in train scaled_loss.backward() File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 118, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 93, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: CUDA out of memory. Tried to allocate 194.00 MiB (GPU 4; 31.72 GiB total capacity; 29.42 GiB already allocated; 155.88 MiB free; 951.73 MiB cached) ``` ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! 
--> - `transformers` version: 2.6.0 - Platform: Linux - Using distributed or parallel set-up in script?: Yes
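The two mitigations that come up in the comments above, sketched below; `gradient_checkpointing` assumes a transformers version that already includes PR #4659:

```python
# Sketch: memory-saving options for the OOM above; the config attribute
# assumes a transformers version that includes gradient checkpointing (PR #4659).
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config.from_pretrained("gpt2-large")
config.gradient_checkpointing = True  # recompute activations in backward to cut memory
model = GPT2LMHeadModel.from_pretrained("gpt2-large", config=config)

# On any version: keep --per_gpu_train_batch_size at 1 and raise
# --gradient_accumulation_steps to preserve the effective batch size.
```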
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3616/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3615
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3615/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3615/comments
https://api.github.com/repos/huggingface/transformers/issues/3615/events
https://github.com/huggingface/transformers/issues/3615
593,303,129
MDU6SXNzdWU1OTMzMDMxMjk=
3,615
Mismatch between the loss shape in the documentation and the output of TransfoXL
{ "login": "TobiasLee", "id": 20009381, "node_id": "MDQ6VXNlcjIwMDA5Mzgx", "avatar_url": "https://avatars.githubusercontent.com/u/20009381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TobiasLee", "html_url": "https://github.com/TobiasLee", "followers_url": "https://api.github.com/users/TobiasLee/followers", "following_url": "https://api.github.com/users/TobiasLee/following{/other_user}", "gists_url": "https://api.github.com/users/TobiasLee/gists{/gist_id}", "starred_url": "https://api.github.com/users/TobiasLee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TobiasLee/subscriptions", "organizations_url": "https://api.github.com/users/TobiasLee/orgs", "repos_url": "https://api.github.com/users/TobiasLee/repos", "events_url": "https://api.github.com/users/TobiasLee/events{/privacy}", "received_events_url": "https://api.github.com/users/TobiasLee/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "The output shape looks correct to me \r\n\"loss (:obj:`torch.FloatTensor` of shape `(batch_size, sequence_length)`\" means that \r\n`outputs[0]` should be of shape ` [batch_size, sequence_length] ` and it is ` [1, 2] `, so that is correct no?", "yes, since the document was updated via #3661" ]
1,585
1,586
1,586
CONTRIBUTOR
null
# πŸ› Bug ## Information Model: Transformer-XL Language: English The problem arises when using: a small demo of transformer-xl output ## To reproduce Steps to reproduce the behavior: ```python3 model_name = 'transfo-xl-wt103' model = AutoModelWithLMHead.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) sentence = 'Hello world' tokenize_input = tokenizer.tokenize(sentence) tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)]) outputs = model(tensor_input, labels=tensor_input) print(outputs[0].size()) ``` run the code above ## Expected behavior The output[0] is supposed to be a tensor shaped as `(1,)`, as described in the [source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_transfo_xl.py#L862), however, the actual shape is (1, 2) (bsz, sequence_len). ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.7.0 - Platform: Ubuntu 18.04 - Python version: 3.6 - PyTorch version (GPU?): 1.4.0, with 1080Ti - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3615/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3614
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3614/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3614/comments
https://api.github.com/repos/huggingface/transformers/issues/3614/events
https://github.com/huggingface/transformers/issues/3614
593,295,506
MDU6SXNzdWU1OTMyOTU1MDY=
3,614
The TensorFlow implementation of T5ForConditionalGeneration runs much slower than the PyTorch one. GPU utilization is 30%
{ "login": "dshaprin", "id": 6575031, "node_id": "MDQ6VXNlcjY1NzUwMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/6575031?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dshaprin", "html_url": "https://github.com/dshaprin", "followers_url": "https://api.github.com/users/dshaprin/followers", "following_url": "https://api.github.com/users/dshaprin/following{/other_user}", "gists_url": "https://api.github.com/users/dshaprin/gists{/gist_id}", "starred_url": "https://api.github.com/users/dshaprin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dshaprin/subscriptions", "organizations_url": "https://api.github.com/users/dshaprin/orgs", "repos_url": "https://api.github.com/users/dshaprin/repos", "events_url": "https://api.github.com/users/dshaprin/events{/privacy}", "received_events_url": "https://api.github.com/users/dshaprin/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "I want to take a closer look in a week or so at this. This issue seems to be related: https://github.com/huggingface/transformers/issues/4634", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "The problem is likely because of `generate()` not being compatible with `tf.function`. I want to take a look at this in more detail while working on this PR: https://github.com/huggingface/transformers/pull/5662", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,585
1,602
1,602
NONE
null
# πŸ› Bug When I am running the official example in examples/summarization/t5/example on PyTorch, I have much better performance than the Tensorflow one. When running on PyTorch it needs 4s per iteration and uses 100% of the GPU. When running the TensorFlow model it needs 30s per iteration and the GPU utilization is 15-20%. ## Information Model I am using: t5-small Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: (give the name): CNN Dailymail * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Follow the steps in the official examples/summarization/t5/example 2. Use the modified evaluate_cnn.py script provided below ``` import argparse from pathlib import Path from tqdm import tqdm from rouge_score import rouge_scorer, scoring from transformers import TFT5ForConditionalGeneration, T5Tokenizer def chunks(lst, n): """Yield successive n-sized chunks from lst.""" for i in range(0, len(lst), n): yield lst[i : i + n] def generate_summaries(lns, output_file_path, model_size, batch_size): output_file = Path(output_file_path).open("w") model = TFT5ForConditionalGeneration.from_pretrained(model_size) tokenizer = T5Tokenizer.from_pretrained(model_size) # update config with summarization specific params task_specific_params = model.config.task_specific_params if task_specific_params is not None: model.config.update(task_specific_params.get("summarization", {})) for batch in tqdm(list(chunks(lns, batch_size))): batch = [model.config.prefix + text for text in batch] dct = tokenizer.batch_encode_plus(batch, max_length=512, return_tensors="tf", pad_to_max_length=True) input_ids = dct["input_ids"]#.to(device) attention_mask = dct["attention_mask"]#.to(device) summaries = model.generate(input_ids=input_ids, attention_mask=attention_mask) dec = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summaries] for hypothesis in dec: output_file.write(hypothesis + "\n") output_file.flush() def calculate_rouge(output_lns, reference_lns, score_path): score_file = Path(score_path).open("w") scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True) aggregator = scoring.BootstrapAggregator() for reference_ln, output_ln in zip(reference_lns, output_lns): scores = scorer.score(reference_ln, output_ln) aggregator.add_scores(scores) result = aggregator.aggregate() score_file.write( "ROUGE_1: \n{} \n\n ROUGE_2: \n{} \n\n ROUGE_L: \n{} \n\n".format( result["rouge1"], result["rouge2"], result["rougeL"] ) ) def run_generate(): parser = argparse.ArgumentParser() parser.add_argument( "model_size", type=str, help="T5 model size, either 't5-small', 't5-base', 't5-large', 't5-3b', 't5-11b'. 
Defaults to 't5-base'.", default="t5-base", ) parser.add_argument( "input_path", type=str, help="like cnn_dm/test_articles_input.txt", ) parser.add_argument( "output_path", type=str, help="where to save summaries", ) parser.add_argument("reference_path", type=str, help="like cnn_dm/test_reference_summaries.txt") parser.add_argument( "score_path", type=str, help="where to save the rouge score", ) parser.add_argument( "--batch_size", type=int, default=8, required=False, help="batch size: how many to summarize at a time", ) parser.add_argument( "--no_cuda", default=False, type=bool, help="Whether to force the execution on CPU.", ) args = parser.parse_args() # args.device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu") source_lns = [x.rstrip() for x in open(args.input_path).readlines()] generate_summaries(source_lns, args.output_path, args.model_size, args.batch_size) output_lns = [x.rstrip() for x in open(args.output_path).readlines()] reference_lns = [x.rstrip() for x in open(args.reference_path).readlines()] calculate_rouge(output_lns, reference_lns, args.score_path) if __name__ == "__main__": run_generate() ``` ## Expected behavior The Tensorflow code should work with similar performance as the PyTorch one ## Environment info absl-py 0.9.0 astor 0.7.1 attrs 19.3.0 blinker 1.4 boto3 1.12.34 botocore 1.15.34 cachetools 3.1.1 certifi 2019.11.28 cffi 1.14.0 chardet 3.0.4 click 7.1.1 cryptography 2.8 dill 0.3.1.1 docutils 0.15.2 filelock 3.0.12 future 0.18.2 gast 0.2.2 google-auth 1.12.0 google-auth-oauthlib 0.4.1 google-pasta 0.2.0 googleapis-common-protos 1.51.0 grpcio 1.27.2 h5py 2.10.0 idna 2.9 jmespath 0.9.5 joblib 0.14.1 Keras-Applications 1.0.8 Keras-Preprocessing 1.1.0 Markdown 3.2.1 nltk 3.4.5 numpy 1.18.1 oauthlib 3.0.1 opt-einsum 3.2.0 pip 20.0.2 promise 2.3 protobuf 3.11.4 pyasn1 0.4.8 pyasn1-modules 0.2.7 pycparser 2.20 PyJWT 1.7.1 pyOpenSSL 19.1.0 PySocks 1.7.1 python-dateutil 2.8.1 regex 2020.2.20 requests 2.23.0 requests-oauthlib 1.2.0 rouge-score 0.0.3 rsa 4.0 s3transfer 0.3.3 sacremoses 0.0.38 scipy 1.4.1 sentencepiece 0.1.85 setuptools 46.1.3.post20200325 six 1.14.0 tensorboard 2.1.0 tensorflow 2.1.0 tensorflow-datasets 2.1.0 tensorflow-estimator 2.1.0 tensorflow-gpu 2.1.0 tensorflow-metadata 0.21.1 termcolor 1.1.0 tokenizers 0.5.2 torch 1.4.0 tqdm 4.45.0 transformers 2.7.0 urllib3 1.25.7 Werkzeug 1.0.1 wheel 0.34.2 wrapt 1.12.1 - `transformers` version: 2.70, 2.7.1, and builded from 81484b447b7d8504ff5e1cfff38ec35918383963 - Platform: Ubuntu Ubuntu 18.04.4 LTS - Python version: 3.7.6 - PyTorch version (GPU?): - Tensorflow version (GPU?):2.1.0 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?:No
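One direction suggested in the comments above: `generate()` was not `tf.function`-compatible at the time, but the per-step forward pass, where most of the time is spent, can in principle be compiled to reduce eager-mode overhead. A rough sketch under that assumption, where `model` is the one loaded in the script above; the exact call signature of TF T5 varies across versions, so treat this as illustrative:

```python
# Sketch: compile the forward pass with tf.function to cut eager overhead.
# generate() itself could not be wrapped at the time (see comments above).
import tensorflow as tf

@tf.function(experimental_relax_shapes=True)
def compiled_forward(input_ids, attention_mask, decoder_input_ids):
    return model(input_ids, attention_mask=attention_mask,
                 decoder_input_ids=decoder_input_ids)
```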
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3614/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3614/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/3613
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3613/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3613/comments
https://api.github.com/repos/huggingface/transformers/issues/3613/events
https://github.com/huggingface/transformers/pull/3613
593,290,070
MDExOlB1bGxSZXF1ZXN0Mzk4MDk4NzQw
3,613
Added albert-base-bahasa-cased README and fixed tiny-bert-bahasa-cased README
{ "login": "huseinzol05", "id": 19810909, "node_id": "MDQ6VXNlcjE5ODEwOTA5", "avatar_url": "https://avatars.githubusercontent.com/u/19810909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/huseinzol05", "html_url": "https://github.com/huseinzol05", "followers_url": "https://api.github.com/users/huseinzol05/followers", "following_url": "https://api.github.com/users/huseinzol05/following{/other_user}", "gists_url": "https://api.github.com/users/huseinzol05/gists{/gist_id}", "starred_url": "https://api.github.com/users/huseinzol05/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/huseinzol05/subscriptions", "organizations_url": "https://api.github.com/users/huseinzol05/orgs", "repos_url": "https://api.github.com/users/huseinzol05/repos", "events_url": "https://api.github.com/users/huseinzol05/events{/privacy}", "received_events_url": "https://api.github.com/users/huseinzol05/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,585
1,585
1,585
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3613/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/3613", "html_url": "https://github.com/huggingface/transformers/pull/3613", "diff_url": "https://github.com/huggingface/transformers/pull/3613.diff", "patch_url": "https://github.com/huggingface/transformers/pull/3613.patch", "merged_at": 1585920524000 }