url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/5920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5920/comments | https://api.github.com/repos/huggingface/transformers/issues/5920/events | https://github.com/huggingface/transformers/pull/5920 | 662,100,311 | MDExOlB1bGxSZXF1ZXN0NDUzNjkyMzI0 | 5,920 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=h1) Report\n> Merging [#5920](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/32883b310ba30d72e67bb2ebb5847888f03a90a8&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5920 +/- ##\n=======================================\n Coverage 78.51% 78.51% \n=======================================\n Files 146 146 \n Lines 26214 26214 \n=======================================\n Hits 20583 20583 \n Misses 5631 5631 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5920/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5920/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5920/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=footer). 
Last update [32883b3...2579206](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5920/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5920",
"html_url": "https://github.com/huggingface/transformers/pull/5920",
"diff_url": "https://github.com/huggingface/transformers/pull/5920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5920.patch",
"merged_at": 1595316703000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5919/comments | https://api.github.com/repos/huggingface/transformers/issues/5919/events | https://github.com/huggingface/transformers/pull/5919 | 662,095,613 | MDExOlB1bGxSZXF1ZXN0NDUzNjg4MjQ3 | 5,919 | [examples/seq2seq]: add --label_smoothing option | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=h1) Report\n> Merging [#5919](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4781afd045b4722e7f28347f1c4f42a56a4550e8&el=desc) will **decrease** coverage by `0.17%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5919 +/- ##\n==========================================\n- Coverage 78.69% 78.51% -0.18% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n- Hits 20628 20581 -47 \n- Misses 5586 5633 +47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=footer). 
Last update [4781afd...303f0ac](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Gunna merge this.\r\nThe packed dataset is definitely a win, label smoothing less clear.\r\nTODO: figure out loss function mystery."
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | cc @patil-suraj
This seems to improve BLEU score by ~2pts!
- fixes MBartDataset src,tgt flipping bug
- adds `--early_stopping_patience` command line arg for PL.
- wandb now looks for shell variable `$WANDB_PROJECT_NAME` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5919/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5919/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5919",
"html_url": "https://github.com/huggingface/transformers/pull/5919",
"diff_url": "https://github.com/huggingface/transformers/pull/5919.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5919.patch",
"merged_at": 1595364700000
} |
https://api.github.com/repos/huggingface/transformers/issues/5918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5918/comments | https://api.github.com/repos/huggingface/transformers/issues/5918/events | https://github.com/huggingface/transformers/issues/5918 | 662,030,418 | MDU6SXNzdWU2NjIwMzA0MTg= | 5,918 | Add Fast Transformers - Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention | {
"login": "bratao",
"id": 1090152,
"node_id": "MDQ6VXNlcjEwOTAxNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1090152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bratao",
"html_url": "https://github.com/bratao",
"followers_url": "https://api.github.com/users/bratao/followers",
"following_url": "https://api.github.com/users/bratao/following{/other_user}",
"gists_url": "https://api.github.com/users/bratao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bratao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bratao/subscriptions",
"organizations_url": "https://api.github.com/users/bratao/orgs",
"repos_url": "https://api.github.com/users/bratao/repos",
"events_url": "https://api.github.com/users/bratao/events{/privacy}",
"received_events_url": "https://api.github.com/users/bratao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi guys, let us know how we can help and also kindly add @apoorv2904 to the author list.\r\n\r\nAlthough the model weights are nothing particularly useful we do provide them for our colab so let us know if they are needed and how to provide them.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Don't let it die. In my tests this is the best performing model so far!",
"@patrickvonplaten @sgugger \r\n\r\nI could try to include on huggingface/transformers if there is an interest from the core team. But I would have to depend on https://github.com/idiap/fast-transformers as they created optimized cuda/cpu c++ versions of the proposed attention. A MR with this dependency would be accepted by Huggingface? ",
"would love if this comes in!",
"Hey @bratao,\r\n\r\nYes, we would definitely be interested in this model and would also be fine with an optional dependency of `https://github.com/idiap/fast-transformers` Also pinging @joeddav @TevenLeScao here (in case you guys are interested in helping with the integration). \r\n\r\nI would also be happy to help you with the model integration otherwise @bratao :-) ",
"Great, I'm on it @patrickvonplaten \r\n\r\nI will work on this in my free time. As soon as I have something, I will post the fork here.\r\n\r\nIf anyone else wants to help or speed it up, just talk to me using the email in my profile!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"What happened to this model? It was not finally integrated right? :( @bratao @patrickvonplaten "
] | 1,595 | 1,616 | 1,609 | NONE | null | # 🌟 New model addition
## Model description
The Fast Transformers repo introduces a fast transformer model based on work to improve attention published in two papers:
- Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (https://arxiv.org/abs/2006.16236)
- Fast Transformers with Clustered Attention (https://arxiv.org/abs/2007.04825)
## Open source status
* [X] the model implementation is available: (give details)
https://github.com/idiap/fast-transformers
* [x] the model weights are available: (give details)
* [X] who are the authors: (mention them, if possible by @gh-username)
@angeloskath | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5918/reactions",
"total_count": 13,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/5918/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5917/comments | https://api.github.com/repos/huggingface/transformers/issues/5917/events | https://github.com/huggingface/transformers/issues/5917 | 662,029,138 | MDU6SXNzdWU2NjIwMjkxMzg= | 5,917 | convert_roberta: AttributeError when converting CamemBERT model.pt to pytorch_model.bin | {
"login": "LilianBordeau",
"id": 24193358,
"node_id": "MDQ6VXNlcjI0MTkzMzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/24193358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LilianBordeau",
"html_url": "https://github.com/LilianBordeau",
"followers_url": "https://api.github.com/users/LilianBordeau/followers",
"following_url": "https://api.github.com/users/LilianBordeau/following{/other_user}",
"gists_url": "https://api.github.com/users/LilianBordeau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LilianBordeau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LilianBordeau/subscriptions",
"organizations_url": "https://api.github.com/users/LilianBordeau/orgs",
"repos_url": "https://api.github.com/users/LilianBordeau/repos",
"events_url": "https://api.github.com/users/LilianBordeau/events{/privacy}",
"received_events_url": "https://api.github.com/users/LilianBordeau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"maybe @sshleifer has an idea",
"I can help! \r\nOn fairseq master (as of 5/28/20), that class seems to no longer have a `decoder` attribute.\r\n\r\nI think you want to change the `roberta.model.decoder` references to `self.model.encoder`, but hard to know without seeing the `state_dict`/handling the model interactively.\r\n\r\nThe best way to debug is to either instantiate the fairseq model in jupyter/ipython or set a breakpoint and see what the attributes are.\r\n\r\nIf you are stuck, feel free to upload your `model.pt` to some cloud storage and I can give it a shot!\r\n\r\n",
"Hey @sshleifer!\r\n\r\nI did what you suggested and it worked, thanks a lot. You have to replace all the references to `roberta.model.decoder` with `roberta.model.encoder` as the attributes were just renamed.\r\n\r\nOn the other hand, I can't figure out what happened to `roberta.args.num_classes` that is used for the classification_head flag, which makes it useless for now.\r\n\r\nI would gladly commit the fix but I'm not a powergit user, so I'll leave it to the pros.\r\n\r\nThanks again!\r\n\r\n***\r\n\r\nEdit : the error that comes up with the flag. \r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"converter.py\", line 179, in <module>\r\n args.roberta_checkpoint_path, args.pytorch_dump_folder_path, args.classification_head\r\n File \"converter.py\", line 62, in convert_roberta_checkpoint_to_pytorch\r\n config.num_labels = roberta.args.num_classes\r\nAttributeError: 'Namespace' object has no attribute 'num_classes'\r\n```\r\n",
"I think `num_classes` will be like something like\r\n\r\n`roberta.model.classification_heads[some_key].out_proj.weight.shape[0]`\r\nThere is likely only one possible key.",
"I just checked with a model fine-tuned on MNLI and the key is classification_heads['mnli'], is this what you expected?",
"sounds right! Don't lose that head!",
"Hey @sshleifer,\r\nSorry if that's not the right place to ask but I couldn't find an answer to that question anywhere: is there a script like this one to convert a model.pt trained on gpu to a model.bin ? or should this script works both for cpu and gpu models ?\r\nThanks!",
"Did our library save the `model.pt`?\r\n\r\nThe filenames don't really matter if the contents of the file are a `state_dict`.\r\nSo it may be as simple as, from the terminal,\r\n```bash\r\nmv model.pt pytorch_model.bin\r\n```\r\nThe library doesn't care if a `state_dict` was saved on gpu or cpu.\r\n\r\nIf that fails try to run `torch.load('model.pt', map_location='cpu')` in an interactive environment and see if it's a state dict.\r\n\r\n",
"Thanks for the answer @sshleifer! HuggingFace was working well, it was my nvidia apex installation that was broken and returned errors in fast-bert that confused me. All works well now!"
] | 1,595 | 1,596 | 1,596 | NONE | null | Hi,
I trained a CamemBERT model with the fairseq library which gave me the following files:
- dict.txt: vocabulary coming from the sentencepiece model
- sentencepiece.bpe.model
- model.pt
Now I am trying to convert the model.pt into pytorch_model.bin and config.json as mentioned here ([fairseq/issues#1514](https://github.com/pytorch/fairseq/issues/1514)) and here ([transformers/issue#1850](https://github.com/huggingface/transformers/issues/1850)), by using the conversion script of the transformers library ([transfomers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py)). The goal is to use those files with fast-bert.
However, using this command line:
```shell
python convert_roberta_original_pytorch_checkpoint_to_pytorch.py --roberta_checkpoint_path ./ --pytorch_dump_folder_path ./ --classification_head
```
I get the following error:
```python
AttributeError Traceback (most recent call last)
<ipython-input-27-ea791887ff26> in <module>
----> 1 convert_roberta_original_pytorch_checkpoint_to_pytorch.convert_roberta_checkpoint_to_pytorch(CAMEMBERT_PATH, CAMEMBERT_PATH, True)
~/anaconda3/envs/NLP/lib/python3.7/site-packages/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py in convert_roberta_checkpoint_to_pytorch(roberta_checkpoint_path, pytorch_dump_folder_path, classification_head)
48 roberta = FairseqRobertaModel.from_pretrained(roberta_checkpoint_path)
49 roberta.eval() # disable dropout
---> 50 roberta_sent_encoder = roberta.model.decoder.sentence_encoder
51 config = RobertaConfig(
52 vocab_size=roberta_sent_encoder.embed_tokens.num_embeddings,
~/anaconda3/envs/NLP/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
592 return modules[name]
593 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 594 type(self).__name__, name))
595
596 def __setattr__(self, name, value):
AttributeError: 'RobertaModel' object has no attribute 'decoder'
```
And indeed, when I check, the fairseq/pytorch `RobertaModel` has no `decoder` attribute.
Am I doing this wrong? I see no other conversion script that fits my CamemBERT model, so I guess the RoBERTa one is the right one.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5917/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5916/comments | https://api.github.com/repos/huggingface/transformers/issues/5916/events | https://github.com/huggingface/transformers/pull/5916 | 662,021,414 | MDExOlB1bGxSZXF1ZXN0NDUzNjIyNTkx | 5,916 | Clarify arg class | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"👍 "
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | Just clarifying which dataset we're talking about. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5916/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5916",
"html_url": "https://github.com/huggingface/transformers/pull/5916",
"diff_url": "https://github.com/huggingface/transformers/pull/5916.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5916.patch",
"merged_at": 1595288827000
} |
https://api.github.com/repos/huggingface/transformers/issues/5915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5915/comments | https://api.github.com/repos/huggingface/transformers/issues/5915/events | https://github.com/huggingface/transformers/issues/5915 | 661,994,213 | MDU6SXNzdWU2NjE5OTQyMTM= | 5,915 | Incompatible tensor type when running BART on TPU | {
"login": "marton-avrios",
"id": 59836119,
"node_id": "MDQ6VXNlcjU5ODM2MTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/59836119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marton-avrios",
"html_url": "https://github.com/marton-avrios",
"followers_url": "https://api.github.com/users/marton-avrios/followers",
"following_url": "https://api.github.com/users/marton-avrios/following{/other_user}",
"gists_url": "https://api.github.com/users/marton-avrios/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marton-avrios/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marton-avrios/subscriptions",
"organizations_url": "https://api.github.com/users/marton-avrios/orgs",
"repos_url": "https://api.github.com/users/marton-avrios/repos",
"events_url": "https://api.github.com/users/marton-avrios/events{/privacy}",
"received_events_url": "https://api.github.com/users/marton-avrios/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Let's consolidate the discussion to #5895 .\r\nDefinitely an issue!"
] | 1,595 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): `facebook/bart-large`
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: XSUM
* [ ] my own task or dataset: (give details below)
## To reproduce
1. Set up a Google VM with the XLA image and configure it to use TPUs
2. Follow the instructions in the `seq2seq` example for downloading XSUM
3. Then run
```
export PYTHONPATH="../":"${PYTHONPATH}"
python finetune.py \
--learning_rate=3e-5 \
--gpus 0 \
--n_tpu_cores 8 \
--do_train \
--do_predict \
--n_val 1000 \
--val_check_interval 0.1 \
--data_dir ${PWD}/xsum \
--train_batch_size=1 \
--eval_batch_size=1 \
--output_dir=xsum_results \
--num_train_epochs 1 \
--model_name_or_path facebook/bart-large
```
...and you get something like
```
Exception in device=TPU:5: Attempted to call `variable.set_data(tensor)`, but `variable` and `tensor` have incompatible tensor type.
Traceback (most recent call last):
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 222, in tpu_train
self.run_pretrain_routine(model)
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1196, in run_pretrain_routine
False)
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 293, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 470, in evaluation_forward
output = model.validation_step(*args)
File "/home/martongyorgy/transformers/examples/seq2seq/finetune.py", line 145, in validation_step
return self._generative_step(batch)
File "/home/martongyorgy/transformers/examples/seq2seq/finetune.py", line 176, in _generative_step
decoder_start_token_id=self.decoder_start_token_id,
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/transformers/generation_utils.py", line 248, in generate
if self.get_output_embeddings() is None:
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/transformers/modeling_bart.py", line 1113, in get_output_embeddings
return _make_linear_from_emb(self.model.shared) # make it on the fly
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/transformers/modeling_bart.py", line 190, in _make_linear_from_emb
lin_layer.weight.data = emb.weight.data
RuntimeError: Attempted to call `variable.set_data(tensor)`, but `variable` and `tensor` have incompatible tensor type.
```
## Environment info
```
- `transformers` version: 3.0.2
- Platform: Linux-4.9.0-12-amd64-x86_64-with-debian-9.12
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.0a0+ab660ae (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: TPU setup following Google Cloud tutorial for PyTorch
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5915/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5914/comments | https://api.github.com/repos/huggingface/transformers/issues/5914/events | https://github.com/huggingface/transformers/pull/5914 | 661,992,925 | MDExOlB1bGxSZXF1ZXN0NDUzNTk3NzQw | 5,914 | Add AlbertForPretraining to doc | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=h1) Report\n> Merging [#5914](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f19751117d54a4dd677c614f6e400a7ee49b3f24&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5914 +/- ##\n==========================================\n+ Coverage 78.49% 78.51% +0.02% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n+ Hits 20577 20583 +6 \n+ Misses 5637 5631 -6 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=footer). 
Last update [f197511...8901dff](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | Document models that were absent. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5914/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5914",
"html_url": "https://github.com/huggingface/transformers/pull/5914",
"diff_url": "https://github.com/huggingface/transformers/pull/5914.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5914.patch",
"merged_at": 1595282002000
} |
https://api.github.com/repos/huggingface/transformers/issues/5913 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5913/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5913/comments | https://api.github.com/repos/huggingface/transformers/issues/5913/events | https://github.com/huggingface/transformers/pull/5913 | 661,978,721 | MDExOlB1bGxSZXF1ZXN0NDUzNTg1MzEx | 5,913 | [Fix] seq2seq pack_dataset.py actually packs | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5913?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@f197511`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5913?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5913 +/- ##\n=========================================\n Coverage ? 78.69% \n=========================================\n Files ? 146 \n Lines ? 26214 \n Branches ? 0 \n=========================================\n Hits ? 20629 \n Misses ? 5585 \n Partials ? 0 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5913?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5913?src=pr&el=footer). Last update [f197511...547c2ad](https://codecov.io/gh/huggingface/transformers/pull/5913?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Added stronger test (that failed before small code fixes).
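For context, "packing" here means greedily concatenating consecutive examples until a token budget is reached, so the model trains on fewer, fuller sequences. A rough whitespace-token sketch of the idea (hypothetical illustration, not the actual `pack_dataset.py` code):

```python
# Hypothetical sketch of "packing" a dataset (NOT the actual
# examples/seq2seq/pack_dataset.py code): greedily merge consecutive
# examples while the whitespace-token count stays within max_tokens.
def pack_examples(examples, max_tokens, sep=" "):
    packed, current = [], ""
    for ex in examples:
        candidate = ex if not current else current + sep + ex
        if len(candidate.split()) <= max_tokens:
            current = candidate  # still fits: keep growing the current pack
        else:
            if current:
                packed.append(current)
            current = ex  # an over-budget single example is kept as-is
    if current:
        packed.append(current)
    return packed
```

Note that a single example longer than `max_tokens` is still emitted on its own rather than dropped; a real implementation would count model tokens, not whitespace tokens.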
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5913/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5913",
"html_url": "https://github.com/huggingface/transformers/pull/5913",
"diff_url": "https://github.com/huggingface/transformers/pull/5913.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5913.patch",
"merged_at": 1595272706000
} |
https://api.github.com/repos/huggingface/transformers/issues/5912 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5912/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5912/comments | https://api.github.com/repos/huggingface/transformers/issues/5912/events | https://github.com/huggingface/transformers/pull/5912 | 661,942,363 | MDExOlB1bGxSZXF1ZXN0NDUzNTUzMjYx | 5,912 | Improve doc of use_cache | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5912?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@f197511`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5912?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5912 +/- ##\n=========================================\n Coverage ? 78.46% \n=========================================\n Files ? 146 \n Lines ? 26214 \n Branches ? 0 \n=========================================\n Hits ? 20569 \n Misses ? 5645 \n Partials ? 0 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5912?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.33% <ø> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5912?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5912?src=pr&el=footer). Last update [f197511...b3c3a0e](https://codecov.io/gh/huggingface/transformers/pull/5912?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | Followup from #5883 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5912/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5912",
"html_url": "https://github.com/huggingface/transformers/pull/5912",
"diff_url": "https://github.com/huggingface/transformers/pull/5912.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5912.patch",
"merged_at": 1595260241000
} |
https://api.github.com/repos/huggingface/transformers/issues/5911 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5911/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5911/comments | https://api.github.com/repos/huggingface/transformers/issues/5911/events | https://github.com/huggingface/transformers/pull/5911 | 661,878,880 | MDExOlB1bGxSZXF1ZXN0NDUzNDk3NjE0 | 5,911 | [WIP] Add Pegasus | {
"login": "JingqingZ",
"id": 6067093,
"node_id": "MDQ6VXNlcjYwNjcwOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6067093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JingqingZ",
"html_url": "https://github.com/JingqingZ",
"followers_url": "https://api.github.com/users/JingqingZ/followers",
"following_url": "https://api.github.com/users/JingqingZ/following{/other_user}",
"gists_url": "https://api.github.com/users/JingqingZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JingqingZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JingqingZ/subscriptions",
"organizations_url": "https://api.github.com/users/JingqingZ/orgs",
"repos_url": "https://api.github.com/users/JingqingZ/repos",
"events_url": "https://api.github.com/users/JingqingZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/JingqingZ/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1841528858,
"node_id": "MDU6TGFiZWwxODQxNTI4ODU4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Summarization",
"name": "Summarization",
"color": "b6f97f",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @sshleifer, I will continue to push code to this PR and make it runnable asap.",
"@sshleifer The code has been uploaded. (1) The test is runnable in TF2 and loads AESLC checkpoints successfully with correct outputs. (2) Code of models and layers (including decoding, beam search) are all in a single file which may look messy (sorry). (3) Most code is simply copied and pasted (then converted to TF2) from the original PEGASUS repo so you may refer to the original repo for clearer code if necessary. \r\n\r\nI think you can start from here. Please let me know if I can help further.",
"@JingqingZ I'm gunna add torch first in #6340 (you will be a PR co-author). And then come back here to finish TF. No action needed from you, just an update.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,602 | 1,602 | CONTRIBUTOR | null | * Add PEGASUS in TF2 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5911/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5911",
"html_url": "https://github.com/huggingface/transformers/pull/5911",
"diff_url": "https://github.com/huggingface/transformers/pull/5911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5911.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5910 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5910/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5910/comments | https://api.github.com/repos/huggingface/transformers/issues/5910/events | https://github.com/huggingface/transformers/issues/5910 | 661,769,786 | MDU6SXNzdWU2NjE3Njk3ODY= | 5,910 | QA Pipeline: Key Error due to predicting a token in question | {
"login": "brandenchan",
"id": 33759007,
"node_id": "MDQ6VXNlcjMzNzU5MDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/33759007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brandenchan",
"html_url": "https://github.com/brandenchan",
"followers_url": "https://api.github.com/users/brandenchan/followers",
"following_url": "https://api.github.com/users/brandenchan/following{/other_user}",
"gists_url": "https://api.github.com/users/brandenchan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brandenchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brandenchan/subscriptions",
"organizations_url": "https://api.github.com/users/brandenchan/orgs",
"repos_url": "https://api.github.com/users/brandenchan/repos",
"events_url": "https://api.github.com/users/brandenchan/events{/privacy}",
"received_events_url": "https://api.github.com/users/brandenchan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"I also have this -- and related issues on v3.0.0, 3.0.2\r\n\r\non v3.0.0 I get Key Error for sequence terminal tokens -- I think this may be to do with removing special tokens but not masking their positions or due to a truncation error -- may be truncating one too few?\r\non v3.0.2 I get Key Error for tokens at the beginning of the sequence (Key Error : 0) -- may be an issue to do with the [CLS] token and/or tokens falling in the question span.\r\n\r\nI think the trick is to use the attention_mask * token_type_ids * start/end_scores. This will set the logits for all tokens outside the answer to 0 and can be done easily on batch tensors/GPU. I will see if I can put together a pull request.\r\n\r\nPlatform: Mac OS Catalina, GCP linux cuda 10.1\r\nPython version: 3.6.8\r\nPyTorch version (GPU?): 1.5.1, GPU\r\nUsing GPU in script?: Yes\r\nUsing distributed or parallel set-up in script?: No",
"@brandenchan @jusjosgra thanks for reporting the issue and the steps to reproduce.\r\n\r\nWe did have an issue but it should have been fixed on master branch.\r\nWhen running the snippet you provided on master I get the following: \r\n\r\n`{'score': 0.05008925125002861, 'start': 679, 'end': 697, 'answer': 'Lord Eddard Stark,'}`\r\n\r\nIf you can checkout the master branch and give it a try to make sure it works on your side too.\r\n\r\n_If everything work as expected: I need to checkout with the team when we can do a maintenance release._",
"I still have an error. It appears to be for an example where start and end are both predicted as 0 (a null answer).\r\nIn this case the valid results have been filtered to document characters only and so an index for 0 doesnt exist in feature.token_to_orig_map (the first index in my instance is 12, 0 doesnt exist).\r\n\r\nSo there needs to be a method to handle when the predicted span occurs outside the filtered feature dict. You could return a null object since these cases represent no answer or you could return the best answer found inside the valid span (i.e. mask the logits for non document tokens when getting the max value).",
"Here is one solution:\r\n\r\noriginal code\r\n```python\r\n # Normalize logits and spans to retrieve the answer\r\n start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))\r\n end_ = np.exp(end_ - np.log(np.sum(np.exp(end_), axis=-1, keepdims=True)))\r\n\r\n if kwargs[\"handle_impossible_answer\"]:\r\n min_null_score = min(min_null_score, (start_[0] * end_[0]).item())\r\n\r\n starts, ends, scores = self.decode(start_, end_, kwargs[\"topk\"], kwargs[\"max_answer_len\"])\r\n char_to_word = np.array(example.char_to_word_offset)\r\n\r\n # Convert the answer (tokens) back to the original text\r\n answers += [\r\n {\r\n \"score\": score.item(),\r\n \"start\": np.where(char_to_word == feature.token_to_orig_map[s])[0][0].item(),\r\n \"end\": np.where(char_to_word == feature.token_to_orig_map[e])[0][-1].item(),\r\n \"answer\": \" \".join(\r\n example.doc_tokens[feature.token_to_orig_map[s] : feature.token_to_orig_map[e] + 1] \r\n ),\r\n }\r\n for s, e, score in zip(starts, ends, scores)\r\n```\r\nmy update\r\n```python\r\n # Normalize logits and spans to retrieve the answer\r\n start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))\r\n end_ = np.exp(end_ - np.log(np.sum(np.exp(end_), axis=-1, keepdims=True)))\r\n\r\n if kwargs[\"handle_impossible_answer\"]:\r\n min_null_score = min(min_null_score, (start_[0] * end_[0]).item())\r\n\r\n starts, ends, scores = self.decode(start_, end_, kwargs[\"topk\"], kwargs[\"max_answer_len\"])\r\n char_to_word = np.array(example.char_to_word_offset)\r\n\r\n # Convert the answer (tokens) back to the original text\r\n answers += [\r\n {\r\n \"score\": score.item(),\r\n \"start\": np.where(char_to_word == feature.token_to_orig_map[s])[0][0].item(),\r\n \"end\": np.where(char_to_word == feature.token_to_orig_map[e])[0][-1].item(),\r\n \"answer\": \" \".join(\r\n example.doc_tokens[feature.token_to_orig_map[s] : feature.token_to_orig_map[e] + 1]\r\n ),\r\n }\r\n if s in 
feature.token_to_orig_map and e in feature.token_to_orig_map # this condition handles the case when answer spans are outside the valid token range.\r\n else {\"score\": min_null_score, \"start\": 0, \"end\": 0, \"answer\": \"\"}\r\n for s, e, score in zip(starts, ends, scores)\r\n```\r\n\r\npersonally I would rather get the best valid span (max over the masked logits) rather than an error/null answer. This might be a more useful use of \"handle impossible answer\". Returning null answers might be the best default behaviour and \"best valid span\" might be a good alternative although this would involve a significant refactor of decode to mask the logits appropriately.",
"I think there is another bug in the decode function (although I may be misunderstanding).\r\nYou compute negative log likelihoods as probabilities but in order to mask items you set them to 0. These items need to be set to a high negative number (e.g. -99) as valid values span zero.\r\nfor example:\r\n```python\r\ndef decode(...):\r\n\r\n...\r\n start_, end_ = (\r\n start_ - np.abs(-99 * np.array(feature.p_mask)),\r\n end_ - np.abs(-99 * np.array(feature.p_mask)),\r\n )\r\n\r\n # Mask CLS\r\n start_[0] = end_[0] = -99.\r\n```",
"@mfuntowicz \r\nGot the same key error zero issue. Above code fixed it",
"@jusjosgra Thanks for the provided solution here, do you want to submit a PR with your fix? Tag myself and @LysandreJik as reviewers and we'll merge it into master. \r\n\r\nOtherwise I'll do 😄.\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,604 | 1,604 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model: deepset/roberta-base-squad2
Language: English
The problem arises when using: QA inference via pipeline
This seems to be a very similar issue to #5711
The pipeline throws an exception when the model predicts a token that is not part of the document; in this case, the predicted token seems to lie in the question.
In the example below, the model predicts token 3 as both the start and the end of the answer span. But that token is part of the question, I believe. Therefore, we get a `KeyError` when trying to access `feature.token_to_orig_map[3]` here:
https://github.com/huggingface/transformers/blob/ce374ba87767d551f720242d5e64bfa976531079/src/transformers/pipelines.py#L1370-L1380
## To reproduce
```python
from transformers import pipeline
nlp = pipeline("question-answering", model="deepset/roberta-base-squad2",
tokenizer="deepset/roberta-base-squad2",
device=-1)
nlp(question="Who is the father of Sansa Stark?", context="===''A Game of Thrones''===\
Sansa Stark begins the novel by being betrothed to Crown Prince Joffrey Baratheon, believing Joffrey to be a gallant prince. While Joffrey and Sansa are walking through the woods, Joffrey notices Arya sparring with the butcher's boy, Mycah. A fight breaks out and Joffrey is attacked by Nymeria (Arya's direwolf) after Joffrey threatens to hurt Arya. Sansa lies to King Robert about the circumstances of the fight in order to protect both Joffrey and her sister Arya. Since Arya ran off with her wolf to save it, Sansa's wolf is killed instead, estranging the Stark daughters.\
During the Tourney of the Hand to honour her father Lord Eddard Stark, Sansa Stark is enchanted by the knights performing in the event. At the request of his mother, Queen Cersei Lannister, Joffrey spends a portion of the tourney with Sansa, but near the end he commands his guard Sandor Clegane, better known as The Hound, to take her back to her quarters. Sandor explains how his older brother, Gregor, aka "Mountain that Rides" pushed his face into a brazier of hot coals, for playing with one of his wooden toys.\
After Eddard discovers the truth of Joffrey's paternity, he tells Sansa that they will be heading back to Winterfell. Sansa is devastated and wishes to stay in King's Landing, so she runs off to inform Queen Cersei of her father's plans, unwittingly providing Cersei with the information needed to arrest her father. After Robert dies, Sansa begs Joffrey to show mercy on her father and he agrees, if Ned will swear an oath of loyalty, but executes him anyway, in front of Sansa. Sansa is now effectively a hostage in King's Landing and finally sees Joffrey's true nature, after he forces her to look at the tarred head of her now-deceased father.")
```
results in
```
Traceback (most recent call last):
File "/Users/deepset/deepset/haystack/tutorials/Tutorial1_Basic_QA_Pipeline.py", line 145, in <module>
prediction = finder.get_answers(question="Who is the father of Sansa Stark?", top_k_retriever=1, top_k_reader=5)
File "/Users/deepset/deepset/haystack/haystack/finder.py", line 57, in get_answers
top_k=top_k_reader) # type: Dict[str, Any]
File "/Users/deepset/deepset/haystack/haystack/reader/transformers.py", line 80, in predict
predictions = self.model(query, topk=self.n_best_per_passage)
File "/Users/deepset/deepset/environments/haystack/lib/python3.7/site-packages/transformers/pipelines.py", line 1316, in __call__
for s, e, score in zip(starts, ends, scores)
File "/Users/deepset/deepset/environments/haystack/lib/python3.7/site-packages/transformers/pipelines.py", line 1316, in <listcomp>
for s, e, score in zip(starts, ends, scores)
KeyError: 3
```
## Expected behavior
Predictions pointing to tokens that are not part of the "context" (here: tokens in the question) should be filtered out of the set of possible answers.
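A minimal sketch of that filtering (hypothetical helper, not the actual `pipelines.py` implementation; `p_mask` mirrors the field of the same name on `SquadFeatures`, with `1` marking question/special tokens):

```python
# Hypothetical sketch (NOT the actual transformers pipeline code): mask
# question/special tokens before decoding, so the predicted answer span
# can only fall on context tokens and the KeyError cannot occur downstream.
import numpy as np

def _softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def best_valid_span(start_logits, end_logits, p_mask, max_answer_len=15):
    """Return (start, end) of the most probable span made only of context tokens."""
    start = np.asarray(start_logits, dtype=float).copy()
    end = np.asarray(end_logits, dtype=float).copy()
    mask = np.asarray(p_mask, dtype=bool)  # 1 -> token may NOT be in the answer
    # Push masked logits towards -inf BEFORE the softmax; setting them to 0
    # would be wrong, because valid logits can themselves be negative.
    start[mask] = -1e9
    end[mask] = -1e9
    start_p, end_p = _softmax(start), _softmax(end)
    # Probability of every (s, e) pair, keeping only s <= e < s + max_answer_len.
    candidates = np.tril(np.triu(start_p[:, None] * end_p[None, :]), max_answer_len - 1)
    s, e = np.unravel_index(candidates.argmax(), candidates.shape)
    return int(s), int(e)
```

With this, a span like the `token 3` prediction above is simply assigned near-zero probability and can never win the argmax.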
## Environment info
- `transformers` version: latest master (82dd96cae74797be0c1d330566df7f929214b278)
- Platform: Mac OS Catalina
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.1, CPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5910/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5910/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5909 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5909/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5909/comments | https://api.github.com/repos/huggingface/transformers/issues/5909/events | https://github.com/huggingface/transformers/pull/5909 | 661,722,337 | MDExOlB1bGxSZXF1ZXN0NDUzMzYxNDIx | 5,909 | Make Tokenizers Faster When There Are Many Additional Special Tokens | {
"login": "gonglinyuan",
"id": 9744170,
"node_id": "MDQ6VXNlcjk3NDQxNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9744170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gonglinyuan",
"html_url": "https://github.com/gonglinyuan",
"followers_url": "https://api.github.com/users/gonglinyuan/followers",
"following_url": "https://api.github.com/users/gonglinyuan/following{/other_user}",
"gists_url": "https://api.github.com/users/gonglinyuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gonglinyuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gonglinyuan/subscriptions",
"organizations_url": "https://api.github.com/users/gonglinyuan/orgs",
"repos_url": "https://api.github.com/users/gonglinyuan/repos",
"events_url": "https://api.github.com/users/gonglinyuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/gonglinyuan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=h1) Report\n> Merging [#5909](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82dd96cae74797be0c1d330566df7f929214b278&el=desc) will **decrease** coverage by `0.13%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5909 +/- ##\n==========================================\n- Coverage 78.49% 78.35% -0.14% \n==========================================\n Files 146 146 \n Lines 26210 26211 +1 \n==========================================\n- Hits 20573 20538 -35 \n- Misses 5637 5673 +36 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | 
`95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=footer). Last update [82dd96c...c90f92b](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | `PreTrainedTokenizer.unique_no_split_tokens` used to be a list that contains all special tokens. During tokenization, the tokenizer will repeatedly check `if sub_text not in self.unique_no_split_tokens` or `if token not in self.unique_no_split_tokens`. List lookups will significantly slow down tokenization if the list is large, i.e., there are many additional special tokens added to `unique_no_split_tokens`. To resolve this issue, this commit will change `PreTrainedTokenizer.unique_no_split_tokens` to be an ordered dict (actually an ordered set, since all values are `None`), such that lookups can be done very efficiently while still keeping its original ordering. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5909/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5909",
"html_url": "https://github.com/huggingface/transformers/pull/5909",
"diff_url": "https://github.com/huggingface/transformers/pull/5909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5909.patch",
"merged_at": null
} |
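The list-to-ordered-dict change described in #5909 above can be sketched as follows (a minimal illustration, not the library's actual implementation — token names are made up):

```python
from collections import OrderedDict

# Keep special tokens in an ordered dict used as an ordered set: membership
# tests become O(1) hash lookups instead of O(n) list scans, while insertion
# order (and thus tokenization behavior) is preserved.
unique_no_split_tokens = OrderedDict()  # token -> None; values are unused
for tok in ["[CLS]", "[SEP]", "[EXTRA_1]", "[EXTRA_2]"]:
    unique_no_split_tokens[tok] = None

assert "[SEP]" in unique_no_split_tokens  # fast lookup during tokenization
assert list(unique_no_split_tokens) == ["[CLS]", "[SEP]", "[EXTRA_1]", "[EXTRA_2]"]
```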
https://api.github.com/repos/huggingface/transformers/issues/5908 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5908/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5908/comments | https://api.github.com/repos/huggingface/transformers/issues/5908/events | https://github.com/huggingface/transformers/issues/5908 | 661,704,434 | MDU6SXNzdWU2NjE3MDQ0MzQ= | 5,908 | ImportError: cannot import name 'DataCollatorForPermutationLanguageModeling' | {
"login": "krannnn",
"id": 66248879,
"node_id": "MDQ6VXNlcjY2MjQ4ODc5",
"avatar_url": "https://avatars.githubusercontent.com/u/66248879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krannnn",
"html_url": "https://github.com/krannnn",
"followers_url": "https://api.github.com/users/krannnn/followers",
"following_url": "https://api.github.com/users/krannnn/following{/other_user}",
"gists_url": "https://api.github.com/users/krannnn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krannnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krannnn/subscriptions",
"organizations_url": "https://api.github.com/users/krannnn/orgs",
"repos_url": "https://api.github.com/users/krannnn/repos",
"events_url": "https://api.github.com/users/krannnn/events{/privacy}",
"received_events_url": "https://api.github.com/users/krannnn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @krannnn , `DataCollatorForPermutationLanguageModeling` is added after 3.0, you will need to install from source if you want to run examples",
"Hi @patil-suraj , out of curiosity, how do you install it? What do you mean by you will need to install from source?"
] | 1,595 | 1,597 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: [`run_language_modeling.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py).
The tasks I am working on is:
* [ ] Continue Training XLNet on domain-specific dataset / finetuning XLNet LM
## To reproduce
Steps to reproduce the behavior:
1. Install transformers 3.0
2. run the following command as mentioned in the readme file from examples :
```
python run_language_modeling.py \
--output_dir=output \
--model_type=xlnet \
--model_name_or_path=xlnet-base-cased \
--do_train \
--train_data_file=$TRAIN_FILE
```
Error message :
```
2020-07-20 10:47:29.463663: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
File "run_language_modeling.py", line 29, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForPermutationLanguageModeling'
```
## Expected behavior
I expect not to have this import error since I'm using the latest release of the library
## Environment info
- `transformers` version: 3.0
- Platform: Google Colab
- Python version: Python 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: Tesla K80
- Using distributed or parallel set-up in script?: No
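As noted in the comments, the collator only shipped after the 3.0.x releases, so the failure mode can be made explicit with a small version gate (an illustrative helper, not part of the transformers API):

```python
# DataCollatorForPermutationLanguageModeling is not present in 3.0.x, so a
# quick version check explains the ImportError up front (sketch only).
def needs_source_install(version: str) -> bool:
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) <= (3, 0)

assert needs_source_install("3.0.2")      # pip release -> install from source
assert not needs_source_install("3.1.0")  # a newer release includes the collator
```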
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5908/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5907 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5907/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5907/comments | https://api.github.com/repos/huggingface/transformers/issues/5907/events | https://github.com/huggingface/transformers/issues/5907 | 661,657,555 | MDU6SXNzdWU2NjE2NTc1NTU= | 5,907 | ModuleNotFoundError: No module named 'torch_xla' | {
"login": "vyaslkv",
"id": 33617789,
"node_id": "MDQ6VXNlcjMzNjE3Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/33617789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vyaslkv",
"html_url": "https://github.com/vyaslkv",
"followers_url": "https://api.github.com/users/vyaslkv/followers",
"following_url": "https://api.github.com/users/vyaslkv/following{/other_user}",
"gists_url": "https://api.github.com/users/vyaslkv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vyaslkv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyaslkv/subscriptions",
"organizations_url": "https://api.github.com/users/vyaslkv/orgs",
"repos_url": "https://api.github.com/users/vyaslkv/repos",
"events_url": "https://api.github.com/users/vyaslkv/events{/privacy}",
"received_events_url": "https://api.github.com/users/vyaslkv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @vyaslkv , what is your transformers version ? I tried this from master branch and it worked.",
"'3.0.2' Thanks for the quick reply, which version worked for you",
"I also tried with the master branch by uninstalling the transformers and then using the repo\r\n",
"worked!! Thanks @patil-suraj :)",
"can you give me an example how to use this a short one",
"<img width=\"1138\" alt=\"Screenshot 2020-07-20 at 7 07 12 PM\" src=\"https://user-images.githubusercontent.com/33617789/87943880-46e07900-cabc-11ea-85e1-c98e8462ac0f.png\">\r\n",
"ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds\r\n",
"You should use `.generate` method for generation. \r\n```python3\r\nmodel.generate(input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'])\r\n```\r\n\r\npinging @mrm8488 for exact `generate` arguments.",
"<img width=\"1166\" alt=\"Screenshot 2020-07-20 at 8 55 41 PM\" src=\"https://user-images.githubusercontent.com/33617789/87955457-68952c80-cacb-11ea-889d-95de35eb4cc0.png\">\r\n",
"is this correct? generating the text out of it the tokenizer decode part?",
"I will write a in the model card the exact arguments to use it ASAP and post it here.",
"Also @vyaslkv it would nice if you post code instead of screenshot so we can copy paste and try the code faster ;)",
"```sh\r\ngit clone https://github.com/huggingface/transformers.git\r\npip install ./transformers\r\n```\r\n```python\r\nfrom transformers import AutoModelWithLMHead, AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"mrm8488/t5-base-finetuned-wikiSQL-sql-to-en\")\r\nmodel = AutoModelWithLMHead.from_pretrained(\"mrm8488/t5-base-finetuned-wikiSQL-sql-to-en\")\r\n\r\ndef get_explanation(query):\r\n input_text = \"translante Sql to English: %s </s>\" % query\r\n features = tokenizer([input_text], return_tensors='pt')\r\n\r\n output = model.generate(input_ids=features['input_ids'], \r\n attention_mask=features['attention_mask'])\r\n \r\n return tokenizer.decode(output[0])\r\n\r\nquery = \"SELECT COUNT Params form model where location=HF-Hub\"\r\n\r\nget_explanation(query)\r\n```",
"@mrm8488 can you also make something like nlp to sql",
"@mrm8488 it doesn't work for longer queries or is there any particular format I should give",
"> @mrm8488 can you also make something like nlp to sql\r\n\r\nI already did it. ",
"> @mrm8488 it doesn't work for longer queries or is there any particular format I should give\r\n\r\nThe max number of f tokens is 128 but I am currently working on the 256 version",
"@mrm8488 can you send me the link of nlp to sql",
"https://huggingface.co/mrm8488/t5-base-finetuned-wikiSQL-sql-to-en",
"Model card: https://github.com/huggingface/transformers/commit/61e8be9940096ce763872c8d1479965511d0b472",
"@mrm8488 I think this is sql to English not English to SQL correct me If I am wrong",
"English to SQL is t5-base-finetuned-wikiSQL or English to SQL is t5-small-finetuned-wikiSQL",
"https://github.com/mrm8488/shared_colab_notebooks/blob/master/T5_finetuned_wikiSQL_demo.ipynb",
"The main issue is solved, closing this for now. Feel free to re-open if the problem persists."
] | 1,595 | 1,596 | 1,596 | NONE | null | from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL-sql-to-en")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL-sql-to-en") | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5907/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5906 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5906/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5906/comments | https://api.github.com/repos/huggingface/transformers/issues/5906/events | https://github.com/huggingface/transformers/issues/5906 | 661,653,844 | MDU6SXNzdWU2NjE2NTM4NDQ= | 5,906 | Word frequencies in TransfoXLTokenizer | {
"login": "GregoireMialon",
"id": 24235883,
"node_id": "MDQ6VXNlcjI0MjM1ODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/24235883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GregoireMialon",
"html_url": "https://github.com/GregoireMialon",
"followers_url": "https://api.github.com/users/GregoireMialon/followers",
"following_url": "https://api.github.com/users/GregoireMialon/following{/other_user}",
"gists_url": "https://api.github.com/users/GregoireMialon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GregoireMialon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GregoireMialon/subscriptions",
"organizations_url": "https://api.github.com/users/GregoireMialon/orgs",
"repos_url": "https://api.github.com/users/GregoireMialon/repos",
"events_url": "https://api.github.com/users/GregoireMialon/events{/privacy}",
"received_events_url": "https://api.github.com/users/GregoireMialon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | I was wondering if it is still possible to access word frequencies through the populated counter of TransfoXLTokenizer? For example, `tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')` seems to have an empty counter. This is referring to:
"So p_M(S) is just the output of the model right?
For p_u(S), I think the easiest is probably to use the empirical probabilities.
`TransfoXLTokenizer` has a counter to store words frequencies [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization_transfo_xl.py#L98) which should be populated in the "pretrained" tokenizer so I would use and normalize this to get unconditional probabilities for each word and then compute SLOR."
_Originally posted by @thomwolf in https://github.com/huggingface/transformers/issues/477#issuecomment-483973033_ | {
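The suggestion quoted above can be sketched in a few lines — normalize a populated word counter into empirical unigram probabilities and use them as the unconditional term p_u(S) in SLOR (the toy counts below are hypothetical stand-ins for the tokenizer's counter):

```python
import math
from collections import Counter

# Hypothetical corpus counts standing in for TransfoXLTokenizer's counter
counter = Counter({"the": 50, "cat": 5, "sat": 3, "mat": 2})
total = sum(counter.values())
unigram_prob = {w: c / total for w, c in counter.items()}  # normalize to probabilities

def log_p_u(tokens):
    """Unconditional log-probability of a sentence under the unigram model."""
    return sum(math.log(unigram_prob[t]) for t in tokens)
```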
"url": "https://api.github.com/repos/huggingface/transformers/issues/5906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5906/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5905 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5905/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5905/comments | https://api.github.com/repos/huggingface/transformers/issues/5905/events | https://github.com/huggingface/transformers/issues/5905 | 661,609,187 | MDU6SXNzdWU2NjE2MDkxODc= | 5,905 | Retrain/reuse fine-tuned models on a different set of labels | {
"login": "kevin-yauris",
"id": 31723333,
"node_id": "MDQ6VXNlcjMxNzIzMzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/31723333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kevin-yauris",
"html_url": "https://github.com/kevin-yauris",
"followers_url": "https://api.github.com/users/kevin-yauris/followers",
"following_url": "https://api.github.com/users/kevin-yauris/following{/other_user}",
"gists_url": "https://api.github.com/users/kevin-yauris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kevin-yauris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kevin-yauris/subscriptions",
"organizations_url": "https://api.github.com/users/kevin-yauris/orgs",
"repos_url": "https://api.github.com/users/kevin-yauris/repos",
"events_url": "https://api.github.com/users/kevin-yauris/events{/privacy}",
"received_events_url": "https://api.github.com/users/kevin-yauris/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@kevin-yauris I had a similar problem with retraining fine-tuned model. Here is what I have done. \r\n\r\nDo not pass config parameter when creating your model with `from_pretrained()`. Just initialize it with something like this: \r\n```\r\nmodel = AutoModelForTokenClassification.from_pretrained(\r\n model_name,\r\n from_tf=bool(\".ckpt\" in model_name),\r\n cache_dir=cache_dir,\r\n )\r\n```\r\n\r\nThen, you will need to change the last layer in the model. I was using PyTorch to fine-tune a blank model initially, therefore these steps will work for PyTorch models.\r\n\r\nThe last layer in the `TokenClassification` model is called `classification`. It is simply a linear layer, so you can create new one with the correct shape and randomized weights, and assign it to the initialized model `classification` layer. Say before my layer was (768,5) with the initial 5 classes, and now I want 9 so make a final layer with shape (768,9).\r\n\r\n```\r\n#reinitiallize the final classification layer to match new number of labels\r\nmodel.classifier = torch.nn.Linear(in_features=model.classifier.in_features, out_features=config.num_labels, bias=True)\r\nmodel.config = config\r\nmodel.num_labels = config.num_labels\r\n```\r\nSince to initialize the model you will be loading config file from the fine-tuned model, you also want to change model config to your current one with the new classes, so the correct config gets exported after your model is trained. Also you will want to modify `num_labels` of the model, since that was initialized with the old number of classes in the old config.\r\n",
"Hi @TarasPriadka thank you for answering\r\nI also to the same thing that you did but with Tensorflow https://discuss.huggingface.co/t/retrain-reuse-fine-tuned-models-on-different-set-of-labels/346/5?u=kevinyauris.\r\nI forget about the model.num_labels tho, thank you for the catch.\r\nI wonder if there is another way to do it since if we replace the last layer with randomized weights we can't use the learned weight for some labels that are the same with previous labels/classes.\r\nLet's say there are 3 classes in the initial model and now I want to add 1 more class but the other classes are the same. If we use this method all weights for the last layer are randomized and we need to fine-tune the model with all the data again instead of just give train data for the new class.",
"@kevin-yauris I've seen your forum post since I've been looking for a solution. My idea is that you already have an `id2label` and `label2id` in the model, so you could find if the incoming labels are already trained in the fine-tuned model. You find those which are not and you add randomized layers for them. However I am not sure how you can take a layer, and then just add randomized rows to it.",
"Hi @TarasPriadka ,\r\n\r\nThanks for sharing the solution. I followed the same steps which solved this error \r\n\r\nRuntimeError: Error(s) in loading state_dict for BertForTokenClassification:\r\nsize mismatch for classifier.weight: copying a param with shape torch.Size([17, 1024]) from checkpoint, the shape in current model is torch.Size([13, 1024]).\r\nsize mismatch for classifier.bias: copying a param with shape torch.Size([17]) from checkpoint, the shape in current model is torch.Size([13]).\r\n\r\n\r\nbut now, it throws another error -\r\n\r\n model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\r\n File \"/usr/local/lib/python3.7/site-packages/transformers/trainer.py\", line 514, in train\r\n optimizer.step()\r\n File \"/usr/local/lib/python3.7/site-packages/torch/optim/lr_scheduler.py\", line 67, in wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/transformers/optimization.py\", line 244, in step\r\n exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)\r\nRuntimeError: The size of tensor a (17) must match the size of tensor b (13) at non-singleton dimension 0\r\n\r\n\r\nI had first trained the model on a dataset having 17 classes and now I want to transfer this model to the 2nd dataset which has 13 labels. \r\n\r\nDo we have to change the num_labels for any other layer ?\r\n\r\nThanks,",
"@vikas95 I am not sure, but just changing the model's `num_labels` seemed to be working for me. However, I was scaling up labels, not reducing them. I would assume that it should have the same solution. Maybe you can share your model's layers before and after applying my fix with `print(model)`, and we can take a look into a possible solution.",
"Hi @TarasPriadka ,\r\n\r\nThanks for the suggestion, I printed the model after loading the checkpoint and after updating the classification layer. \r\n\r\nThe classification layer output dimension is changing from your mentioned solution i.e.,\r\n\r\ninitially after loading the checkpoint, model = AutoModelForTokenClassification.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n cache_dir=model_args.cache_dir,\r\n )\r\nThe classification layer size is - (classifier): Linear(in_features=1024, out_features=17, bias=True)\r\n\r\nand after updating the classification layer, the size is - (classifier): Linear(in_features=1024, out_features=13, bias=True)\r\n\r\nThe rest of the layers look similar but I am still not sure why its throwing the previously mentioned error. \r\n\r\n-Vikas",
"@vikas95, so the shape of the model is fine. The issue is in `exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)`. The exception highlights that `exp_avg` is of size 17 and its trying to add `grad` which is 13. So the problem is in `exp_avg`, since it wasn't updated along with everything else. Can you share the whole chunk of code where you initialize the model, trainer, etc?",
"Hi @TarasPriadka ,\r\n\r\nHere is the part where I initialize the model (which is from run_ner.py (https://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/examples/token-classification/run_ner.py#L158)) -\r\n\r\n labels = get_labels(data_args.labels)\r\n label_map = {i: label for i, label in enumerate(labels)}\r\n num_labels = len(labels)\r\n \r\n config = AutoConfig.from_pretrained(\r\n model_args.config_name if model_args.config_name else model_args.model_name_or_path,\r\n num_labels=num_labels,\r\n id2label=label_map,\r\n label2id={label: i for i, label in enumerate(labels)},\r\n cache_dir=model_args.cache_dir,\r\n )\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(\r\n model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\r\n cache_dir=model_args.cache_dir,\r\n use_fast=model_args.use_fast,\r\n )\r\n model = AutoModelForTokenClassification.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n cache_dir=model_args.cache_dir,\r\n )\r\n\r\n model.classifier = torch.nn.Linear(in_features=model.classifier.in_features, out_features=config.num_labels, bias=True)\r\n model.config = config \r\n model.num_labels = config.num_labels\r\n",
"@vikas95 Can you also share the trainer code",
"@TarasPriadka - Its the same as in run_ner.py \r\nI haven't changed any other part of the code. \r\n",
"@vikas95 can you check if in your model folder you have this file `optimizer.pt` and `scheduler.pt`\r\n",
"@TarasPriadka ,\r\n\r\nThanks for the help, I was giving a specific checkpoint directory as the model path i.e., \"datasetA_model/checkpoint-6000/\" which had both optimizer.pt and scheduler.pt\r\n\r\nbut then I changed the model path to just \"datasetA_model/\" and it works fine with no errors. \r\nI am guessing that if I just give the \"datasetA_model/\" as model path then it would select the highest checkpoint ? \r\n\r\nAnyway, thanks a lot for looking at the problem and for all the quick responses and help 😬\r\n",
"@vikas95 This was a great deal of fun. When you are running \r\n```\r\ntrainer.train(\r\n model_path=model_name if os.path.isdir(model_name) else None\r\n )\r\n```\r\ntrainer loads in those files, and initializes Adam optimizer with them. Optimizer breaks since you are changing the shape of the output layer, but optimizer was initialized with the other shape. What you can do, is either delete that file, or just run `trainer.train()` without parameters.",
"Cool, this makes sense. \r\nThanks again for the explanation. I was actually trying with just trainer.train() for last 30 minutes and it works fine. \r\n\r\nThanks again for all the help and explanations. ",
"Does anyone know what is the alternative method in Pytorch?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Thank you all for this solution, it worked for me but I encountered another problem while training getting this error:\r\n`ValueError: Expected input batch_size (3200) to match target batch_size (32).`\r\nMy batch size is indeed 32. If I change it to other value e.g. 16 the error will be:\r\n`ValueError: Expected input batch_size (1600) to match target batch_size (16).` it always multiplies by 100 which is a weird behavior because when trying to run the exact same code but on an original pre-trained model (in my case is `xlm-roberta-base`), to fine-tune it on classification task, it works just fine.\r\n\r\n\r\nHere is my code:\r\n```\r\nconfig = XLMRobertaConfig.from_pretrained(\"../xlm-roberta_domains_classifier/model\", output_hidden_states=True, \r\n num_labels=len(train_df.label.unique()),\r\n id2label=id2label, label2id=label2id)\r\nmodel = XLMRobertaForSequenceClassification.from_pretrained('../xlm-roberta_domains_classifier/model')\r\nmodel.cuda()\r\nmodel.classifier = torch.nn.Linear(in_features=model.classifier.out_proj.in_features, out_features=config.num_labels, bias=True)\r\nmodel.config = config\r\nmodel.num_labels = config.num_labels\r\ntokenizer = XLMRobertaTokenizer.from_pretrained('../xlm-roberta_domains_classifier/model')\r\nmodel.cuda()\r\n```\r\n\r\nModel summary:\r\n```\r\nXLMRobertaForSequenceClassification(\r\n (roberta): RobertaModel(\r\n (embeddings): RobertaEmbeddings(\r\n (word_embeddings): Embedding(250002, 768, padding_idx=1)\r\n (position_embeddings): Embedding(514, 768, padding_idx=1)\r\n (token_type_embeddings): Embedding(1, 768)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (encoder): RobertaEncoder(\r\n (layer): ModuleList(\r\n (0): RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n 
(value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (1): RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (2): RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, 
out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (3): RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (4): RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): 
RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (5): RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (6): RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, 
bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (7): RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (8): RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (9): RobertaLayer(\r\n (attention): 
RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (10): RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n (value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (11): RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): RobertaSelfAttention(\r\n (query): Linear(in_features=768, out_features=768, bias=True)\r\n (key): Linear(in_features=768, out_features=768, bias=True)\r\n 
(value): Linear(in_features=768, out_features=768, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=768, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=768, out_features=3072, bias=True)\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n )\r\n )\r\n )\r\n (classifier): Linear(in_features=768, out_features=12, bias=False)\r\n)\r\n```\r\n\r\n\r\nData preparation:\r\n```\r\ntrain_encoding = tokenizer(train_df.text.to_list(), return_tensors='pt', padding=True, truncation=True).to(device)\r\ntrain_input_ids = train_encoding['input_ids'].to(device)\r\ntrain_attention_mask = train_encoding['attention_mask'].to(device)\r\ntrain_labels = torch.tensor(train_df.label.to_list()).unsqueeze(0).to(device)[0]\r\n\r\nval_encoding = tokenizer(val_df.text.to_list(), return_tensors='pt', padding=True, truncation=True).to(device)\r\nval_input_ids = val_encoding['input_ids'].to(device)\r\nval_attention_mask = val_encoding['attention_mask'].to(device)\r\nval_labels = torch.tensor(val_df.label.to_list()).unsqueeze(0).to(device)[0]\r\n\r\ntest_encoding = tokenizer(test_df.text.to_list(), return_tensors='pt', padding=True, truncation=True).to(device)\r\ntest_input_ids = test_encoding['input_ids'].to(device)\r\ntest_attention_mask = test_encoding['attention_mask'].to(device)\r\ntest_labels = torch.tensor(test_df.label.to_list()).unsqueeze(0).to(device)[0]\r\n\r\n\r\nbatch_size = 32\r\ntrain_data = TensorDataset(train_input_ids, train_attention_mask, train_labels)\r\ntrain_sampler = RandomSampler(train_data)\r\ntrain_dataloader = DataLoader(train_data, 
sampler=train_sampler, batch_size=batch_size)\r\n\r\nvalidation_data = TensorDataset(val_input_ids, val_attention_mask, val_labels)\r\nvalidation_sampler = SequentialSampler(validation_data)\r\nvalidation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size)\r\n\r\ntest_data = TensorDataset(test_input_ids, test_attention_mask, test_labels)\r\ntest_sampler = SequentialSampler(test_data)\r\ntest_dataloader = DataLoader(test_data, sampler=test_sampler, batch_size=batch_size)\r\n```\r\n\r\nTraining logic:\r\n```\r\noptimizer = AdamW(model.parameters(),\r\n lr = 4e-5, \r\n eps = 1e-8 # args.adam_epsilon - default is 1e-8.\r\n )\r\nfrom transformers import get_linear_schedule_with_warmup\r\nepochs = 3\r\ntotal_steps = len(train_dataloader) * epochs\r\nscheduler = get_linear_schedule_with_warmup(optimizer, \r\n num_warmup_steps = 0, # Default value in run_glue.py\r\n num_training_steps = total_steps)\r\n\r\n\r\n\r\nseed_val = 42\r\nrandom.seed(seed_val)\r\nnp.random.seed(seed_val)\r\ntorch.manual_seed(seed_val)\r\ntorch.cuda.manual_seed_all(seed_val)\r\n# Store the average loss after each epoch so we can plot them.\r\nloss_values = []\r\n# For each epoch...\r\nfor epoch_i in range(0, epochs):\r\n \r\n # ========================================\r\n # Training\r\n # ========================================\r\n \r\n print(\"\")\r\n print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))\r\n print('Training...')\r\n t0 = time.time()\r\n total_loss = 0\r\n model.train()\r\n for step, batch in enumerate(train_dataloader):\r\n if step % 50 == 0 and not step == 0:\r\n elapsed = format_time(time.time() - t0)\r\n \r\n print(' Batch {:>5,} of {:>5,}. 
Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))\r\n b_input_ids = batch[0].to(device)\r\n b_input_mask = batch[1].to(device)\r\n b_labels = batch[2].to(device) \r\n model.zero_grad() \r\n \r\n outputs = model(b_input_ids, \r\n token_type_ids=None, \r\n attention_mask=b_input_mask, \r\n labels=b_labels)\r\n \r\n loss = outputs[0]\r\n total_loss += loss.item()\r\n loss.backward()\r\n torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)\r\n optimizer.step()\r\n scheduler.step()\r\n avg_train_loss = total_loss / len(train_dataloader) \r\n \r\n loss_values.append(avg_train_loss)\r\n print(\"\")\r\n print(\" Average training loss: {0:.2f}\".format(avg_train_loss))\r\n print(\" Training epcoh took: {:}\".format(format_time(time.time() - t0)))\r\n \r\n # ========================================\r\n # Validation\r\n # ========================================\r\n print(\"\")\r\n print(\"Running Validation...\")\r\n t0 = time.time()\r\n model.eval()\r\n eval_loss, eval_accuracy = 0, 0\r\n nb_eval_steps, nb_eval_examples = 0, 0\r\n for batch in validation_dataloader:\r\n \r\n batch = tuple(t.to(device) for t in batch)\r\n \r\n b_input_ids, b_input_mask, b_labels = batch\r\n \r\n with torch.no_grad(): \r\n outputs = model(b_input_ids, \r\n token_type_ids=None, \r\n attention_mask=b_input_mask)\r\n \r\n logits = outputs[0]\r\n logits = logits.detach().cpu().numpy()\r\n label_ids = b_labels.to('cpu').numpy()\r\n \r\n tmp_eval_accuracy = flat_accuracy(logits, label_ids)\r\n \r\n eval_accuracy += tmp_eval_accuracy\r\n nb_eval_steps += 1\r\n print(\" Accuracy: {0:.2f}\".format(eval_accuracy/nb_eval_steps))\r\n print(\" Validation took: {:}\".format(format_time(time.time() - t0)))\r\nprint(\"\")\r\nprint(\"Training complete!\")\r\n\r\n```\r\n\r\nThe error occurs when reaching this line:\r\n```\r\noutputs = model(b_input_ids, \r\n token_type_ids=None, \r\n attention_mask=b_input_mask, \r\n labels=b_labels)\r\n```"
] | 1,595 | 1,610 | 1,606 | NONE | null | # ❓ Questions & Help
## Details
Hello,
I am wondering whether it is possible to reuse or retrain a fine-tuned model with a new set of labels (the new set may contain labels not seen during fine-tuning, or may be a subset of the labels used to fine-tune the model)?
What I am trying to do is fine-tune pre-trained models for a task (e.g. NER) on a domain-free dataset, then reuse/retrain this fine-tuned model on a similar task in a more specific domain (e.g. NER for healthcare), where the set of labels may not be the same.
I already tried to fine-tune a BERT model to do NER on WNUT17 data based on the token classification example in the Transformers GitHub repository. After that, I tried to retrain the fine-tuned model with a new label added, providing training data that contains this label, but the training failed with the following error:
```
RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([13, 1024]) from checkpoint, the shape in current model is torch.Size([15, 1024]).
size mismatch for classifier.bias: copying a param with shape torch.Size([13]) from checkpoint, the shape in current model is torch.Size([15]).
```
Is it possible to do this with Transformers, and if so, how? Maybe there is a method that can do something like [this](https://spacy.io/api/entityrecognizer#add_label) (the method is from spaCy). Thank you in advance!
I already posted this in the forum:
[Retrain/reuse fine-tuned models on a different set of labels](https://discuss.huggingface.co/t/retrain-reuse-fine-tuned-models-on-different-set-of-labels/346) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5905/timeline | completed | null | null |
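The size-mismatch error quoted in this row comes from loading a 13-label classifier head into a model built for 15 labels. A common workaround (not shown in the issue itself) is to drop the head's parameters from the checkpoint and load the remainder with `strict=False`, letting the new head initialize fresh. A minimal sketch of the key-filtering step, using plain dictionaries with shapes as tuples in place of real tensors — the real code would pass the filtered dict to `model.load_state_dict(..., strict=False)`:

```python
def filter_mismatched_keys(checkpoint, model_shapes):
    """Keep only checkpoint entries whose shape matches the current model."""
    filtered, dropped = {}, []
    for name, shape in checkpoint.items():
        if model_shapes.get(name) == shape:
            filtered[name] = shape
        else:
            dropped.append(name)
    return filtered, dropped

# Shapes taken from the error message: the old head had 13 labels, the new model 15.
checkpoint = {"encoder.layer.0.weight": (1024, 1024),
              "classifier.weight": (13, 1024),
              "classifier.bias": (13,)}
model_shapes = {"encoder.layer.0.weight": (1024, 1024),
                "classifier.weight": (15, 1024),
                "classifier.bias": (15,)}

filtered, dropped = filter_mismatched_keys(checkpoint, model_shapes)
print(sorted(dropped))  # ['classifier.bias', 'classifier.weight']
```

Only the classifier head is dropped; every backbone weight whose shape still matches is reused, which is exactly what transfer to a new label set needs.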
https://api.github.com/repos/huggingface/transformers/issues/5904 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5904/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5904/comments | https://api.github.com/repos/huggingface/transformers/issues/5904/events | https://github.com/huggingface/transformers/issues/5904 | 661,484,966 | MDU6SXNzdWU2NjE0ODQ5NjY= | 5,904 | RobertaTokenizerFast unexpectedly quits when creating a TextDataset | {
"login": "josiahdavis",
"id": 6405428,
"node_id": "MDQ6VXNlcjY0MDU0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6405428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josiahdavis",
"html_url": "https://github.com/josiahdavis",
"followers_url": "https://api.github.com/users/josiahdavis/followers",
"following_url": "https://api.github.com/users/josiahdavis/following{/other_user}",
"gists_url": "https://api.github.com/users/josiahdavis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josiahdavis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josiahdavis/subscriptions",
"organizations_url": "https://api.github.com/users/josiahdavis/orgs",
"repos_url": "https://api.github.com/users/josiahdavis/repos",
"events_url": "https://api.github.com/users/josiahdavis/events{/privacy}",
"received_events_url": "https://api.github.com/users/josiahdavis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@n1t0 ",
"This seems to work for me, I guess it crashes because you don't have enough memory. Unfortunately `TextDataset` has not been optimized for fast tokenizers yet, so it does a lot more work than needed when using them. It's probably better to use python tokenizers for now with `TextDataset`.\r\n\r\nAlso, maybe the [huggingface/nlp](https://github.com/huggingface/nlp) library might be better suited here. cc @lhoestq ",
"You could try\r\n```python\r\nfrom transformers import AutoTokenizer\r\nfrom nlp import load_dataset\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"roberta-base\", use_fast=True)\r\ndataset = load_dataset(\"text\", data_files=\"path/to/wiki.train.raw\", split=\"train\")\r\ntokenized_dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"]), batched=True)\r\nprint(tokenized_dataset[0][\"input_ids\"])\r\n```\r\nWe're still working on making it as fast as we can, but at least you won't have any memory issues.",
"Re @n1t0 comment: \"I guess it crashes because you don't have enough memory\" this is correct. (I was hoping I could get away with 61.0 GiB, the standard for an AWS `p3.2xlarge`.)\r\n\r\nRe @lhoestq your code ran without errors for me. Thanks! \r\n\r\nI did get a lot of the [`Token indices sequence length is longer than the specified maximum sequence length for this model (522 > 512). Running this sequence through the model will result in indexing errors`](https://github.com/huggingface/transformers/issues/1791) warnings which I wasn't getting before.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | # 🐛 Bug
When creating a `TextDataset` using `RobertaTokenizerFast` my program unexpectedly dies. (Not so with `RobertaTokenizer`).
## Information
Model I am using: RoBERTa
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: [language modelling](https://github.com/huggingface/transformers/blob/33d3072e1c54bcd235447b98c6dea1b4cb71234c/examples/run_lm_finetuning.py)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import AutoTokenizer, TextDataset
tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True)
train_dataset = TextDataset(
tokenizer=tokenizer,
file_path="/home/ubuntu/data/wikitext-103-raw/wiki.train.raw",
block_size=-1,
overwrite_cache=False,
)
print(train_dataset)
```
## Expected behavior
Creation of the training dataset, not having the process killed, e.g.:
```
<transformers.data.datasets.language_modeling.TextDataset object at 0x7f138a1fd2b0>
```
## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-5.3.0-1030-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5904/timeline | completed | null | null |
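The comments above attribute the crash to memory: `TextDataset` materializes the whole corpus at once. A generic way around that, independent of any particular library, is to stream the file in small batches and tokenize one batch at a time. A minimal sketch of the batching step in plain Python — the real pipeline would hand each yielded batch to the tokenizer instead of collecting it:

```python
def iter_batches(lines, batch_size):
    """Yield successive batches so only one batch of text is in memory at a time."""
    batch = []
    for line in lines:
        line = line.strip()
        if not line:        # skip blank lines, as a raw wikitext file contains many
            continue
        batch.append(line)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:               # flush the final, possibly short, batch
        yield batch

corpus = ["first sentence", "", "second sentence", "third sentence"]
batches = list(iter_batches(corpus, batch_size=2))
print(batches)  # [['first sentence', 'second sentence'], ['third sentence']]
```

With a real file, `lines` would be the open file handle itself, so peak memory is bounded by `batch_size` regardless of corpus size.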
https://api.github.com/repos/huggingface/transformers/issues/5903 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5903/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5903/comments | https://api.github.com/repos/huggingface/transformers/issues/5903/events | https://github.com/huggingface/transformers/pull/5903 | 661,474,875 | MDExOlB1bGxSZXF1ZXN0NDUzMTQ0MDA1 | 5,903 | [WIP] Add Theseus Compression | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=h1) Report\n> Merging [#5903](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/492bb6aa486856f8243dfeb533ed1b23e996e403?el=desc) will **decrease** coverage by `2.80%`.\n> The diff coverage is `83.75%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5903 +/- ##\n==========================================\n- Coverage 80.12% 77.31% -2.81% \n==========================================\n Files 169 152 -17 \n Lines 32317 26290 -6027 \n==========================================\n- Hits 25893 20326 -5567 \n+ Misses 6424 5964 -460 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/theseus/theseus\\_list.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90aGVzZXVzL3RoZXNldXNfbGlzdC5weQ==) | `67.64% <67.64%> (ø)` | |\n| [src/transformers/theseus/theseus\\_module.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90aGVzZXVzL3RoZXNldXNfbW9kdWxlLnB5) | `88.88% <88.88%> (ø)` | |\n| [src/transformers/theseus/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90aGVzZXVzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/theseus/layerdrop\\_list.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90aGVzZXVzL2xheWVyZHJvcF9saXN0LnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/theseus/mixout\\_list.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90aGVzZXVzL21peG91dF9saXN0LnB5) | `100.00% <100.00%> (ø)` | |\n| 
[src/transformers/theseus/theseus\\_errors.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90aGVzZXVzL3RoZXNldXNfZXJyb3JzLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/commands/env.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9lbnYucHk=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |\n| [src/transformers/commands/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9fX2luaXRfXy5weQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |\n| [src/transformers/commands/transformers\\_cli.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFuc2Zvcm1lcnNfY2xpLnB5) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.68%)` | :arrow_down: |\n| ... and [166 more](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=footer). Last update [492bb6a...20dd34f](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"which examples have you tested this with?",
"> which examples have you tested this with?\n\nrun_glue and run_ner but more to come!",
"Want an early review for this? Would be wonderful! And it can save me some time before I do the docs. @sshleifer ",
"> run_glue and run_ner but more to come! \r\n\r\n:heart_eyes: can't wait to re-fine-tune my NER models :hugs: ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Well I've been really busy recently but don't close it for me stalebot!",
"Thanks for reopening it @LysandreJik! I was stuck with some details and I'll probably get it done soon.",
"Sounds great, looking forward to it!",
"Will move to an independent package. Closing this."
] | 1,595 | 1,651 | 1,623 | CONTRIBUTOR | null | `transformers.theseus` provides the implementation for BERT-of-Theseus, LayerDrop and Mixout.
Original BERT-of-Theseus authors: @JetRunner @MichaelZhouwang
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5903/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5903",
"html_url": "https://github.com/huggingface/transformers/pull/5903",
"diff_url": "https://github.com/huggingface/transformers/pull/5903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5903.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5902 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5902/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5902/comments | https://api.github.com/repos/huggingface/transformers/issues/5902/events | https://github.com/huggingface/transformers/issues/5902 | 661,394,792 | MDU6SXNzdWU2NjEzOTQ3OTI= | 5,902 | 🐛 BART : Same representations for different `<s>` tokens | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Very strange indeed. Nothing comes to mind, but if you can show the fairseq discrepancies I can take a look.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@astariul Hi I also encounter this problem. The finetuned BART just gives the same output for `<s>` under eval mode, have you fixed it? ",
"@turing-yfqiu I'm sorry it's long time ago, I don't remember how I did...."
] | 1,595 | 1,633 | 1,601 | CONTRIBUTOR | null | # 🐛 Bug
## Context
I'm trying to use BART for a sentence classification task. So I encode the input with the following format :
```
<s> Sen1 </s> <s> Sen2 </s> <s> Sen3 </s> ...
```
And use each `<s>` token as the sentence representation (taken from the encoder, not the decoder). Then I classify these representations.
---
I trained my model but the classification gives random predictions. After debugging, I noticed that the encoder always produces the same representation for the `<s>` token.
## Bug
**The representation of the `<s>` token is always the same, no matter where it appears in the input.**
Here is a notebook reproducing the issue : [Colab](https://colab.research.google.com/drive/1mqKKFAEGEwa5XbkJtm7_VrmQ8L3_0Bnt?usp=sharing)
In this notebook I simply encode an input, modify it to add an additional `<s>` token, forward it through BART, and compare the encoder representations of the `<s>` tokens. It gives me:
```
tensor([-0.0097, 0.0075, 0.0086, ..., 0.0041, -0.0085, -0.0011],
grad_fn=<SelectBackward>)
tensor([-0.0097, 0.0075, 0.0086, ..., 0.0041, -0.0085, -0.0011],
grad_fn=<SelectBackward>)
```
---
Now if I do the same with the `</s>` token, their representations are different:
```
tensor([ 0.0658, 0.0161, -0.0062, ..., -0.0536, -0.0515, 0.1837],
grad_fn=<SelectBackward>)
tensor([ 0.0627, 0.0576, 0.0408, ..., -0.0406, -0.0765, 0.1689],
grad_fn=<SelectBackward>)
```
Which makes sense because they represent different things...
## Note 1
I think this is a bug because I remember that a few months ago I tried to do the same (use the `<s>` representation for classification) with the fairseq version of BART, and it worked...
## Note 2
I'm aware that the example in the notebook uses just the pre-trained version of BART, so the `<s>` representation does not yet represent a sentence. But even after training the model, the exact same behavior arises.
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5902/timeline | completed | null | null |
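The classification setup described in this issue needs the encoder state at every `<s>` position. A framework-agnostic sketch of that gather step, with nested lists standing in for the `[seq_len, hidden]` tensor — in PyTorch this would be a boolean mask plus indexing, and `<s>` is id 0 in the RoBERTa/BART vocabulary:

```python
BOS_ID = 0  # <s> in the RoBERTa/BART vocabulary

def gather_bos_states(input_ids, hidden_states, bos_id=BOS_ID):
    """Return the hidden vector at each position that holds the <s> token."""
    return [h for tok, h in zip(input_ids, hidden_states) if tok == bos_id]

ids = [0, 11, 12, 2, 0, 13, 2]                       # <s> Sen1 </s> <s> Sen2 </s>
hidden = [[float(i)] * 2 for i in range(len(ids))]   # toy 2-dim hidden states
print(gather_bos_states(ids, hidden))                # [[0.0, 0.0], [4.0, 4.0]]
```

The bug report above is precisely that in the fine-tuned model both gathered vectors come out identical, so a classifier on top of them cannot distinguish sentences.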
https://api.github.com/repos/huggingface/transformers/issues/5901 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5901/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5901/comments | https://api.github.com/repos/huggingface/transformers/issues/5901/events | https://github.com/huggingface/transformers/issues/5901 | 661,304,258 | MDU6SXNzdWU2NjEzMDQyNTg= | 5,901 | How can I check the loss during pretraining huggingface/transformers | {
"login": "YuBeomGon",
"id": 44599580,
"node_id": "MDQ6VXNlcjQ0NTk5NTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/44599580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YuBeomGon",
"html_url": "https://github.com/YuBeomGon",
"followers_url": "https://api.github.com/users/YuBeomGon/followers",
"following_url": "https://api.github.com/users/YuBeomGon/following{/other_user}",
"gists_url": "https://api.github.com/users/YuBeomGon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YuBeomGon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YuBeomGon/subscriptions",
"organizations_url": "https://api.github.com/users/YuBeomGon/orgs",
"repos_url": "https://api.github.com/users/YuBeomGon/repos",
"events_url": "https://api.github.com/users/YuBeomGon/events{/privacy}",
"received_events_url": "https://api.github.com/users/YuBeomGon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
}
] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
] | [
"training loss should be printed every 500 iteration, but there is no log in pretraing.\r\n(parser.add_argument(\"--logsteps\", help=\"logging steps\", type=int, default=500)",
"Hi! This wasn't intentional, so we've fixed it with #6097. If you rerun the script, you should see the losses now."
] | 1,595 | 1,595 | 1,595 | NONE | null |
Thanks in advance.
I trained a RoBERTa model from scratch,
but I can't check the training loss during pretraining.
I did it by referring to the link below:
https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb
In the link above, the loss is printed every 500 steps,
but when I ran it, no loss was printed.
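One common cause, assuming `Trainer` is being used: the periodic loss is emitted through Python's `logging` module at INFO level, so if no handler is configured the records are silently dropped. A minimal sketch — the logger name `transformers.trainer` matches recent library versions but is an assumption here:

```python
import logging

# Configure a root handler BEFORE building the Trainer; otherwise the
# INFO-level records carrying the loss every `logging_steps` are discarded.
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    level=logging.INFO,
)

logger = logging.getLogger("transformers.trainer")  # assumed logger name
logger.info("step=500 loss=1.234")  # shape of the line you should then see
```

Passing `logging_steps=500` (the default) to `TrainingArguments` controls how often such a line appears.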
Iteration: 100%|█████████▉| 20703/20711 [4:42:54<00:07, 1.14it/s][A
Iteration: 100%|█████████▉| 20704/20711 [4:42:54<00:05, 1.24it/s][A
Iteration: 100%|█████████▉| 20705/20711 [4:42:55<00:05, 1.20it/s][A
Iteration: 100%|█████████▉| 20706/20711 [4:42:56<00:04, 1.18it/s][A
Iteration: 100%|█████████▉| 20707/20711 [4:42:57<00:03, 1.19it/s][A
Iteration: 100%|█████████▉| 20708/20711 [4:42:58<00:02, 1.16it/s][A
Iteration: 100%|█████████▉| 20709/20711 [4:42:59<00:01, 1.14it/s][A
Iteration: 100%|█████████▉| 20710/20711 [4:43:00<00:00, 1.13it/s][A
Iteration: 100%|██████████| 20711/20711 [4:43:00<00:00, 1.45it/s][A
Iteration: 100%|██████████| 20711/20711 [4:43:00<00:00, 1.22it/s]
Epoch: 100%|██████████| 13/13 [61:14:16<00:00, 16952.06s/it]
Epoch: 100%|██████████| 13/13 [61:14:16<00:00, 16958.16s/it]
compress roberta.20200717.zip on ./pretrained
save roberta.20200717.zip on minio(petcharts)
Stack Overflow link:
https://stackoverflow.com/questions/62988081/checking-pretraining-loss-in-huggingface-transformers | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5901/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5900 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5900/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5900/comments | https://api.github.com/repos/huggingface/transformers/issues/5900/events | https://github.com/huggingface/transformers/issues/5900 | 661,287,763 | MDU6SXNzdWU2NjEyODc3NjM= | 5,900 | Is there any api for intermediate layer outputs? | {
"login": "mt324010",
"id": 35977320,
"node_id": "MDQ6VXNlcjM1OTc3MzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/35977320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mt324010",
"html_url": "https://github.com/mt324010",
"followers_url": "https://api.github.com/users/mt324010/followers",
"following_url": "https://api.github.com/users/mt324010/following{/other_user}",
"gists_url": "https://api.github.com/users/mt324010/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mt324010/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mt324010/subscriptions",
"organizations_url": "https://api.github.com/users/mt324010/orgs",
"repos_url": "https://api.github.com/users/mt324010/repos",
"events_url": "https://api.github.com/users/mt324010/events{/privacy}",
"received_events_url": "https://api.github.com/users/mt324010/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You need to pass `output_hidden_states=True` when calling your model, or set this option when instantiating it. For instance, if you're using a pretrained model:\r\n```\r\nfrom transformers import DistilBertModel\r\nmodel = DistilBertModel.from_pretrained('distilbert-base-uncased', output_hidden_states=True)\r\n```"
] | 1,595 | 1,595 | 1,595 | NONE | null | I would like to get all 12 Bertlayer outputs. Can you please help? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5900/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5899 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5899/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5899/comments | https://api.github.com/repos/huggingface/transformers/issues/5899/events | https://github.com/huggingface/transformers/issues/5899 | 661,263,261 | MDU6SXNzdWU2NjEyNjMyNjE= | 5,899 | Smaller output vocabulary for GPT-2 | {
"login": "iRove108",
"id": 12959037,
"node_id": "MDQ6VXNlcjEyOTU5MDM3",
"avatar_url": "https://avatars.githubusercontent.com/u/12959037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iRove108",
"html_url": "https://github.com/iRove108",
"followers_url": "https://api.github.com/users/iRove108/followers",
"following_url": "https://api.github.com/users/iRove108/following{/other_user}",
"gists_url": "https://api.github.com/users/iRove108/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iRove108/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iRove108/subscriptions",
"organizations_url": "https://api.github.com/users/iRove108/orgs",
"repos_url": "https://api.github.com/users/iRove108/repos",
"events_url": "https://api.github.com/users/iRove108/events{/privacy}",
"received_events_url": "https://api.github.com/users/iRove108/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | # ❓ Questions & Help
I noticed that by default, GPT2LMHeadModel returns prediction scores of shape (batch_size, sequence_length, config.vocab_size) ([docs link](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2lmheadmodel)). Is there any way for me to limit the output vocabulary to only a subset of words?
I want to take the existing weights from GPT-2, but re-train a new top linear layer with a smaller vocabulary. I suppose I could mask the logits at the end, but then it feels like a waste of computational power to even predict them.
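The logit-masking fallback mentioned above can be sketched in plain PyTorch (the token ids here are arbitrary placeholders; a real subset would come from the tokenizer):

```python
import torch

def mask_logits_to_subset(logits, allowed_ids):
    """Give zero probability to every token outside `allowed_ids` by
    pushing its logit to -inf before the softmax. logits: (batch, vocab)."""
    mask = torch.full_like(logits, float("-inf"))
    mask[:, allowed_ids] = 0.0
    return logits + mask

logits = torch.randn(2, 50257)          # stand-in for GPT-2 LM-head output
allowed = torch.tensor([10, 42, 1000])  # placeholder restricted vocabulary
probs = torch.softmax(mask_logits_to_subset(logits, allowed), dim=-1)
```

Note this only prevents *sampling* disallowed tokens; the full-vocabulary projection is still computed, which is why retraining a smaller head (tied to a pruned embedding matrix) is where the real compute saving would come from.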
**A link to [original question on the forum](https://discuss.huggingface.co/t/smaller-output-vocabulary-for-gpt-2/366)** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5899/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5898 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5898/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5898/comments | https://api.github.com/repos/huggingface/transformers/issues/5898/events | https://github.com/huggingface/transformers/issues/5898 | 661,205,588 | MDU6SXNzdWU2NjEyMDU1ODg= | 5,898 | Fix pack_dataset.py | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | right now it just sorts the examples and prints an incorrect logger message! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5898/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5897 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5897/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5897/comments | https://api.github.com/repos/huggingface/transformers/issues/5897/events | https://github.com/huggingface/transformers/issues/5897 | 661,205,019 | MDU6SXNzdWU2NjEyMDUwMTk= | 5,897 | examples/seq2seq: add label_smoothing cross entropy option | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Started! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5897/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5896 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5896/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5896/comments | https://api.github.com/repos/huggingface/transformers/issues/5896/events | https://github.com/huggingface/transformers/issues/5896 | 661,204,477 | MDU6SXNzdWU2NjEyMDQ0Nzc= | 5,896 | MT: automate/experiment with pruning embeddings | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | This isn't really a github issue, but follow pytorch/fairseq#2120 and see if it works. If it does, make it easy to use! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5896/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5895 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5895/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5895/comments | https://api.github.com/repos/huggingface/transformers/issues/5895/events | https://github.com/huggingface/transformers/issues/5895 | 661,203,281 | MDU6SXNzdWU2NjEyMDMyODE= | 5,895 | examples/seq2seq/finetune.py and BART supports TPU | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Do you want to wait for a stable release for the torch-xla? Also shouldn't the end user pass in \"(num_tpu_cores=8)\" when they are creating the lightning's trainer. It should automatically handle the rest for us i think. Also we have \"bloat16\" for TPUs as well.",
"It totally could work out of the box. In which case this issue could be as simple as running a shell command on a tpu machine, seeing that it works well, and then checking in the working command or commenting here :).\r\n",
"Alright cool! Let's see if I can make an attempt to do it! Very new to this examples section of hugging face. ",
"Hey! So i tried running it on the [colab](https://colab.research.google.com/drive/16q2GWrnZ0Tjg1OxJQUcaWKCWwn3Jh5z0?usp=sharing) first, but it seems there's some error wrt bart model. The traceback it threw, (another issue open as well https://github.com/huggingface/transformers/issues/5915).\r\n\r\nTo reproduce the same, I executed the finetune.sh with\r\n```bash\r\n!sh finetune.sh \\\r\n--data_dir /content/xsum \\\r\n--model_name_or_path facebook/bart-base \\\r\n--output_dir=xsum_results \\\r\n--train_batch_size=2 \\\r\n--eval_batch_size=2 \\\r\n--num_train_epochs 1 \\\r\n--n_tpu_cores 8 \\\r\n--tpu_cores 8\r\n```\r\nAnd modified the args in the shell snip when we invoke the python finetune.py (removed fp16 and gpus to 0)\r\n\r\n\r\n```\r\nAttempted to call `variable.set_data(tensor)`, but `variable` and `tensor` have incompatible tensor type.\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 119, in _start_fn\r\n fn(gindex, *args)\r\n File \"/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py\", line 222, in tpu_train\r\n self.run_pretrain_routine(model)\r\n File \"/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py\", line 1196, in run_pretrain_routine\r\n False)\r\n File \"/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py\", line 293, in _evaluate\r\n output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)\r\n File \"/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py\", line 470, in evaluation_forward\r\n output = model.validation_step(*args)\r\n File \"finetune.py\", line 145, in validation_step\r\n return self._generative_step(batch)\r\n File \"finetune.py\", line 176, in _generative_step\r\n decoder_start_token_id=self.decoder_start_token_id,\r\n File 
\"/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py\", line 15, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py\", line 248, in generate\r\n if self.get_output_embeddings() is None:\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py\", line 1113, in get_output_embeddings\r\n return _make_linear_from_emb(self.model.shared) # make it on the fly\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py\", line 190, in _make_linear_from_emb\r\n lin_layer.weight.data = emb.weight.data\r\n```",
"Makes sense. You could try to instantiate\r\n```python\r\nself.lm_head = _make_linear_from_emb(self.model.shared) \r\n```\r\nin BartForConditionalGeneration.__init__ \r\nand then have `get_output_embeddings` return `self.lm_head`.\r\n",
"Alright! Thanks; Will keep you posted 🎉;\r\n\r\nEdit -1:\r\n\r\nSo things seems to work after i make the changes you said. The trouble is that training seems to be frozen on TPUs.",
"`emb.weight.data` is...\r\n```\r\ntensor([[-0.0370, 0.1117, 0.1829, ..., 0.2054, 0.0578, -0.0750],\r\n [ 0.0055, -0.0049, -0.0069, ..., -0.0030, 0.0038, 0.0087],\r\n [-0.0448, 0.4604, -0.0604, ..., 0.1073, 0.0310, 0.0477],\r\n ...,\r\n [-0.0138, 0.0278, -0.0467, ..., 0.0455, -0.0265, 0.0125],\r\n [-0.0043, 0.0153, -0.0567, ..., 0.0496, 0.0108, -0.0099],\r\n [ 0.0053, 0.0324, -0.0179, ..., -0.0085, 0.0223, -0.0020]],\r\n device='xla:1')\r\n```\r\n...and `lin_layer.weight.data` is...\r\n```\r\ntensor([[-1.0449e-03, 4.0973e-03, -9.7727e-04, ..., 8.2363e-04,\r\n -3.2153e-03, 3.5317e-03],\r\n [ 2.3644e-03, 3.5527e-03, -1.2428e-03, ..., -1.0983e-04,\r\n -2.1916e-03, 5.3099e-05],\r\n [-4.2492e-03, 3.8183e-04, 3.2527e-03, ..., -4.4359e-03,\r\n 7.6555e-04, -4.1728e-03],\r\n ...,\r\n [-4.3412e-03, 2.8537e-03, 7.9720e-04, ..., 2.9499e-03,\r\n 2.6357e-03, -3.5283e-03],\r\n [ 3.7042e-03, -3.0546e-03, 3.9206e-03, ..., -2.3771e-03,\r\n 4.3551e-03, 1.1703e-04],\r\n [ 3.5616e-03, -3.1224e-03, 1.3898e-03, ..., -2.1096e-05,\r\n 5.4077e-04, 1.6183e-03]])\r\n```\r\n... and you try to do...\r\n```\r\nlin_layer.weight.data = emb.weight.data\r\n```\r\n\r\nIsn't the problem that they are on different devices?",
"We can remove the thing which does it on fly(refer to sshleifer comment above). That won't work in case of TPUs.",
"Good to know. Once you get everything working it would be great to have all the required changes consolidated into one PR.",
"I am not quite sure that lightning does this optimization when we use multiple TPU cores (only available at the nightly-xla's). Refer [here](https://github.com/pytorch/xla/issues/1870#issuecomment-623603323).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | - [ ] test the code on tpu
- [ ] if it doesn't work well: change code as little as possible to get it working.
- [ ] add a script/command that works | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5895/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5895/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5894 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5894/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5894/comments | https://api.github.com/repos/huggingface/transformers/issues/5894/events | https://github.com/huggingface/transformers/pull/5894 | 661,070,952 | MDExOlB1bGxSZXF1ZXN0NDUyNzc2MDAw | 5,894 | fix typo in training_args_tf.py | {
"login": "adelevie",
"id": 86790,
"node_id": "MDQ6VXNlcjg2Nzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/86790?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adelevie",
"html_url": "https://github.com/adelevie",
"followers_url": "https://api.github.com/users/adelevie/followers",
"following_url": "https://api.github.com/users/adelevie/following{/other_user}",
"gists_url": "https://api.github.com/users/adelevie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adelevie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adelevie/subscriptions",
"organizations_url": "https://api.github.com/users/adelevie/orgs",
"repos_url": "https://api.github.com/users/adelevie/repos",
"events_url": "https://api.github.com/users/adelevie/events{/privacy}",
"received_events_url": "https://api.github.com/users/adelevie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5894?src=pr&el=h1) Report\n> Merging [#5894](https://codecov.io/gh/huggingface/transformers/pull/5894?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/09a2f40684f77e62d0fd8485fe9d2d610390453f&el=desc) will **decrease** coverage by `1.19%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5894?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5894 +/- ##\n==========================================\n- Coverage 78.49% 77.30% -1.20% \n==========================================\n Files 146 146 \n Lines 26210 26210 \n==========================================\n- Hits 20573 20261 -312 \n- Misses 5637 5949 +312 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5894?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5894/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `47.45% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5894/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5894/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5894/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5894?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5894?src=pr&el=footer). Last update [09a2f40...b9754e4](https://codecov.io/gh/huggingface/transformers/pull/5894?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5894/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5894",
"html_url": "https://github.com/huggingface/transformers/pull/5894",
"diff_url": "https://github.com/huggingface/transformers/pull/5894.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5894.patch",
"merged_at": 1595231302000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5893 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5893/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5893/comments | https://api.github.com/repos/huggingface/transformers/issues/5893/events | https://github.com/huggingface/transformers/pull/5893 | 661,070,918 | MDExOlB1bGxSZXF1ZXN0NDUyNzc1OTY3 | 5,893 | fix typo in training_args.py | {
"login": "adelevie",
"id": 86790,
"node_id": "MDQ6VXNlcjg2Nzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/86790?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adelevie",
"html_url": "https://github.com/adelevie",
"followers_url": "https://api.github.com/users/adelevie/followers",
"following_url": "https://api.github.com/users/adelevie/following{/other_user}",
"gists_url": "https://api.github.com/users/adelevie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adelevie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adelevie/subscriptions",
"organizations_url": "https://api.github.com/users/adelevie/orgs",
"repos_url": "https://api.github.com/users/adelevie/repos",
"events_url": "https://api.github.com/users/adelevie/events{/privacy}",
"received_events_url": "https://api.github.com/users/adelevie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5893?src=pr&el=h1) Report\n> Merging [#5893](https://codecov.io/gh/huggingface/transformers/pull/5893?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/09a2f40684f77e62d0fd8485fe9d2d610390453f&el=desc) will **increase** coverage by `0.17%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5893?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5893 +/- ##\n==========================================\n+ Coverage 78.49% 78.66% +0.17% \n==========================================\n Files 146 146 \n Lines 26210 26210 \n==========================================\n+ Hits 20573 20619 +46 \n+ Misses 5637 5591 -46 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5893?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5893?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5893?src=pr&el=footer). 
Last update [09a2f40...762ff74](https://codecov.io/gh/huggingface/transformers/pull/5893?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5893/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5893",
"html_url": "https://github.com/huggingface/transformers/pull/5893",
"diff_url": "https://github.com/huggingface/transformers/pull/5893.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5893.patch",
"merged_at": 1595231583000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5892 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5892/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5892/comments | https://api.github.com/repos/huggingface/transformers/issues/5892/events | https://github.com/huggingface/transformers/issues/5892 | 661,009,767 | MDU6SXNzdWU2NjEwMDk3Njc= | 5,892 | Benchmark: traceback does not describe real problem | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Working command after MbartConfig change:\r\n```bash\r\nexport d=mbart_benchmark_data\r\npython examples/benchmarking/run_benchmark.py \\\r\n --models facebook/mbart-large-en-ro \\\r\n --log_filename $d/log.txt \\\r\n --inference_memory_csv \\ \r\n $d/inference_memory.csv \\ \r\n --train_memory_csv $d/train_memory.csv \\\r\n --train_time_csv $d/train_time.csv \\\r\n --inference_time_csv $d/inference_time.csv \\\r\n --fp16 --log_print --training --save_to_csv \\\r\n --batch_sizes 4 8 12 16\r\n```\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | To reproduce:
- execute a command that results in 'N/A'
- long traceback describes tuple unpacking. The real problem is what caused the N/A and is far above in a small logger.error statement.
Example Command:
```bash
python examples/benchmarking/run_benchmark.py --models facebook/mbart-large-en-ro
```
Traceback:
```python
1 / 1
ERROR:transformers.benchmark.benchmark_utils:<class 'transformers.configuration_bart.MBartConfig'>
<class 'transformers.configuration_bart.MBartConfig'>
Traceback (most recent call last):
File "examples/benchmarking/run_benchmark.py", line 29, in <module>
main()
File "examples/benchmarking/run_benchmark.py", line 25, in main
benchmark.run()
File "/home/shleifer/transformers_fork/src/transformers/benchmark/benchmark_utils.py", line 665, in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
ValueError: too many values to unpack (expected 2)
```
This may not be worth a fix, and probably won't be fixed soon, but posting for others who might run into this or have ideas for a fix. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5892/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5891 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5891/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5891/comments | https://api.github.com/repos/huggingface/transformers/issues/5891/events | https://github.com/huggingface/transformers/pull/5891 | 661,001,535 | MDExOlB1bGxSZXF1ZXN0NDUyNzEzNDg0 | 5,891 | [WIP] Fix mbart benchmark | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"fixed indirectly by #6441 "
] | 1,595 | 1,597 | 1,597 | CONTRIBUTOR | null | - [ ] undo benchmarking changes
- [ ] publish working command + results | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5891/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5891",
"html_url": "https://github.com/huggingface/transformers/pull/5891",
"diff_url": "https://github.com/huggingface/transformers/pull/5891.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5891.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5890 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5890/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5890/comments | https://api.github.com/repos/huggingface/transformers/issues/5890/events | https://github.com/huggingface/transformers/issues/5890 | 660,881,257 | MDU6SXNzdWU2NjA4ODEyNTc= | 5,890 | How to finetune distillbart from distilbart-cnn-12-6 checkpoint using cnn_daily mail or gigawords dataset? | {
"login": "Hildweig",
"id": 34550304,
"node_id": "MDQ6VXNlcjM0NTUwMzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/34550304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hildweig",
"html_url": "https://github.com/Hildweig",
"followers_url": "https://api.github.com/users/Hildweig/followers",
"following_url": "https://api.github.com/users/Hildweig/following{/other_user}",
"gists_url": "https://api.github.com/users/Hildweig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hildweig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hildweig/subscriptions",
"organizations_url": "https://api.github.com/users/Hildweig/orgs",
"repos_url": "https://api.github.com/users/Hildweig/repos",
"events_url": "https://api.github.com/users/Hildweig/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hildweig/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | I would like to ask about how to finetune distillbart on gigaword and cnn dailymail with the starting checkpoint distilbart-cnn-12-6.
I did use the gigaword dataset provided by tensorflow but it replaces numbers by this character: "#", as a result, my summaries have # instead of numbers, is it normal that it has those # ?
Also is it really possible to finetune distillbart from the checkpoint distilbart-cnn-12-6 with cnn daily mail?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5890/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5889 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5889/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5889/comments | https://api.github.com/repos/huggingface/transformers/issues/5889/events | https://github.com/huggingface/transformers/issues/5889 | 660,658,402 | MDU6SXNzdWU2NjA2NTg0MDI= | 5,889 | AttributeError: 'GPT2LMHeadModel' object has no attribute 'h' | {
"login": "Heiheiyo",
"id": 15426714,
"node_id": "MDQ6VXNlcjE1NDI2NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/15426714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Heiheiyo",
"html_url": "https://github.com/Heiheiyo",
"followers_url": "https://api.github.com/users/Heiheiyo/followers",
"following_url": "https://api.github.com/users/Heiheiyo/following{/other_user}",
"gists_url": "https://api.github.com/users/Heiheiyo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Heiheiyo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Heiheiyo/subscriptions",
"organizations_url": "https://api.github.com/users/Heiheiyo/orgs",
"repos_url": "https://api.github.com/users/Heiheiyo/repos",
"events_url": "https://api.github.com/users/Heiheiyo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Heiheiyo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can't load directly a TF checkpoint if it's not been generated by the TensorFlow side of the library (which can't be the case here since you're using a TF1 checkpoint and the lib requires TF2.0). You probably have to use the [conversion script](https://huggingface.co/transformers/converting_tensorflow_models.html).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,600 | 1,600 | NONE | null | # ❓ Questions & Help
when I run run_generation.py(I use TF1.14 fine tuned gpt2),the program is broken with the error message AttributeError: 'GPT2LMHeadModel' object has no attribute 'h'
Environment:
WIN10
pytorch 1.3.1
python 3.6
Part of the code has been modified

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5889/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5888 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5888/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5888/comments | https://api.github.com/repos/huggingface/transformers/issues/5888/events | https://github.com/huggingface/transformers/issues/5888 | 660,554,698 | MDU6SXNzdWU2NjA1NTQ2OTg= | 5,888 | Reading transformer package from local codes and NOT the pip installed version | {
"login": "vikas95",
"id": 25675079,
"node_id": "MDQ6VXNlcjI1Njc1MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25675079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikas95",
"html_url": "https://github.com/vikas95",
"followers_url": "https://api.github.com/users/vikas95/followers",
"following_url": "https://api.github.com/users/vikas95/following{/other_user}",
"gists_url": "https://api.github.com/users/vikas95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikas95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikas95/subscriptions",
"organizations_url": "https://api.github.com/users/vikas95/orgs",
"repos_url": "https://api.github.com/users/vikas95/repos",
"events_url": "https://api.github.com/users/vikas95/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikas95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"How did you install the package ?\r\n\r\nAccording to the [README](https://github.com/huggingface/transformers#from-source) you should do :\r\n\r\n```console\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install -e .\r\n```\r\n",
"Hi ,\r\n\r\nApologies for raising this issue, the error was something specific to singularity images that I have been using to run transformers on HPC. \r\nPlease pardon this issue. \r\n\r\nAnyway thanks @Colanim. I knew the steps, but the error was from somewhere else. \r\n"
] | 1,595 | 1,595 | 1,595 | NONE | null | Hi @sshleifer ,
There seem to be a problem of python package structure. I am getting an error similar to a previous issue (https://github.com/huggingface/transformers/issues/5303) but with the token_classification/run_ner.py file.
File "examples/token-classification/run_ner.py", line 42, in
from modeling_auto import AutoModelForTokenClassification
File "transformers/src/transformers/modeling_auto.py", line 22, in
from .configuration_auto import (
ImportError: attempted relative import with no known parent package
I have not installed transformers library using pip because I want to use the local codes (cloned from transformers library). After reading various stackoverflow suggestions (https://stackoverflow.com/questions/16981921/relative-imports-in-python-3 and https://napuzba.com/a/import-error-relative-no-parent), I believe that when I am importing the transformer package locally from my own directory, then it is not able read+load transformer as a package.
I am using python3.7
Can you please suggest how to read transformer as a package from local codes.
Thanks... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5888/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5887 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5887/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5887/comments | https://api.github.com/repos/huggingface/transformers/issues/5887/events | https://github.com/huggingface/transformers/issues/5887 | 660,488,939 | MDU6SXNzdWU2NjA0ODg5Mzk= | 5,887 | Seq2Seq: same MultiGPU test failing twice! | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Spent an hour testing the new version on real data:\r\n\r\n(1) `./train_mbart_enro_multigpu` works!\r\n(2) `./train_mbart_enro_multigpu --logger_name wandb` ~doesn't work~ works.\r\n(3) trainer.test seems to try to call `main` again, and then runs into an \"output directory\" already exists error.\r\n\r\nIt hangs for me with versions:\r\n```bash\r\npytorch-lightning==0.8.5\r\ntorch==1.5.1+cu101\r\n```\r\n\r\ncc @nateraw @williamFalcon ",
"@stas00 do you think multi-gpu test coverage (at least on machines with multiple GPU, not CI yet) is possible?",
"> @stas00 do you think multi-gpu test coverage (at least on machines with multiple GPU, not CI yet) is possible?\r\n\r\nAre you referring to `pytest --cov`? Yes, of course, it should work.\r\n\r\nThe main issue with multiprocessing and coverage is a potential unregistered coverage if the sub-process hasn't finished its work right away and is hanging around. But we already have this issue in the benchmarking tests - I need to go back and pound at it some more.",
"Sorry, I'm referring to getting that test to pass.\r\nAt the moment it fails (twice!).",
"Oh, I see what you meant. Let me investigate.",
"Yeah, multigpu is a problem. Here is a totally different manifestation of it - hanging and other subtests failing:\r\n\r\n```\r\npytest --disable-warnings examples/seq2seq/test_seq2seq_examples.py::TestSummarizationDistiller\r\n――――――――――――――――――――――――――――――――――――――――― TestSummarizationDistiller.test_distill_checkpointing_with_teacher ―――――――――――――――――――――――――――――――――――――――――\r\n\r\nself = <seq2seq.test_seq2seq_examples.TestSummarizationDistiller testMethod=test_distill_checkpointing_with_teacher>\r\n\r\n def test_distill_checkpointing_with_teacher(self):\r\n updates = dict(\r\n student_encoder_layers=2,\r\n student_decoder_layers=1,\r\n max_epochs=4,\r\n val_check_interval=0.25,\r\n alpha_hid=2.0,\r\n model_name_or_path=\"IGNORE_THIS_IT_DOESNT_GET_USED\",\r\n )\r\n model = self._test_distiller_cli(updates, check_contents=False)\r\n \r\n ckpts = list(Path(model.output_dir).glob(\"*.ckpt\"))\r\n> self.assertEqual(1, len(ckpts))\r\nE AssertionError: 1 != 0\r\n\r\nexamples/seq2seq/test_seq2seq_examples.py:173: AssertionError\r\n\r\n examples/seq2seq/test_seq2seq_examples.py::TestSummarizationDistiller.test_distill_checkpointing_with_teacher ⨯ 14% █▌ using module BartTranslationDistiller\r\nEpoch 2: 100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 11.53it/s, loss=2.970, v_num=19]\r\n \r\n\r\n――――――――――――――――――――――――――――――――――――――――――――――――――― TestSummarizationDistiller.test_distill_mbart ――――――――――――――――――――――――――――――――――――――――――――――――――――\r\n\r\nself = <seq2seq.test_seq2seq_examples.TestSummarizationDistiller testMethod=test_distill_mbart>\r\n\r\n def test_distill_mbart(self):\r\n updates = dict(\r\n student_encoder_layers=2,\r\n student_decoder_layers=1,\r\n num_train_epochs=4,\r\n val_check_interval=0.25,\r\n alpha_hid=2.0,\r\n task=\"translation\",\r\n model_name_or_path=\"IGNORE_THIS_IT_DOESNT_GET_USED\",\r\n tokenizer_name=MBART_TINY,\r\n teacher=MBART_TINY,\r\n src_lang=\"en_XX\",\r\n 
tgt_lang=\"ro_RO\",\r\n )\r\n model = self._test_distiller_cli(updates, check_contents=False)\r\n assert model.model.config.model_type == \"mbart\"\r\n \r\n ckpts = list(Path(model.output_dir).glob(\"*.ckpt\"))\r\n> self.assertEqual(1, len(ckpts))\r\nE AssertionError: 1 != 0\r\n\r\nexamples/seq2seq/test_seq2seq_examples.py:224: AssertionError\r\n\r\n examples/seq2seq/test_seq2seq_examples.py::TestSummarizationDistiller.test_distill_mbart ⨯ 29% ██▉ using module SummarizationModule\r\nEpoch 2: 100%|███████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 4.23it/s, loss=10.785, v_num=19]\r\n \r\n\r\n――――――――――――――――――――――――――――――――――――――――――――――――― TestSummarizationDistiller.test_distill_no_teacher ―――――――――――――――――――――――――――――――――――――――――――――――――\r\n\r\nself = <seq2seq.test_seq2seq_examples.TestSummarizationDistiller testMethod=test_distill_no_teacher>\r\n\r\n def test_distill_no_teacher(self):\r\n updates = dict(student_encoder_layers=2, student_decoder_layers=1, no_teacher=True)\r\n> self._test_distiller_cli(updates)\r\n\r\nexamples/seq2seq/test_seq2seq_examples.py:159: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <seq2seq.test_seq2seq_examples.TestSummarizationDistiller testMethod=test_distill_no_teacher>\r\nupdates = {'no_teacher': True, 'student_decoder_layers': 1, 'student_encoder_layers': 2}, check_contents = True\r\n\r\n def _test_distiller_cli(self, updates, check_contents=True):\r\n default_updates = dict(\r\n label_smoothing=0.0,\r\n early_stopping_patience=-1,\r\n train_batch_size=1,\r\n eval_batch_size=2,\r\n max_epochs=2,\r\n alpha_mlm=0.2,\r\n alpha_ce=0.8,\r\n do_predict=True,\r\n model_name_or_path=\"sshleifer/tinier_bart\",\r\n teacher=CHEAP_ARGS[\"model_name_or_path\"],\r\n val_check_interval=0.5,\r\n alpha_encoder_loss=0.4,\r\n )\r\n default_updates.update(updates)\r\n args_d: 
dict = CHEAP_ARGS.copy()\r\n tmp_dir = make_test_data_dir()\r\n output_dir = tempfile.mkdtemp(prefix=\"output_\")\r\n\r\n args_d.update(data_dir=tmp_dir, output_dir=output_dir, **default_updates)\r\n model = distill_main(argparse.Namespace(**args_d))\r\n if not check_contents:\r\n return model\r\n contents = os.listdir(output_dir)\r\n contents = {os.path.basename(p) for p in contents}\r\n ckpt_files = [p for p in contents if p.endswith(\"ckpt\")]\r\n> assert len(ckpt_files) > 0\r\nE AssertionError: assert 0 > 0\r\nE + where 0 = len([])\r\n\r\nexamples/seq2seq/test_seq2seq_examples.py:271: AssertionError\r\n\r\n examples/seq2seq/test_seq2seq_examples.py::TestSummarizationDistiller.test_distill_no_teacher ⨯ 43% ████▍\r\n examples/seq2seq/test_seq2seq_examples.py::TestSummarizationDistiller.test_distill_t5 s 57% █████▊\r\n ../../../../../home/stas/anaconda3/envs/main-38/lib/python3.8/unittest/case.py::TestSummarizationDistiller.test_hub_configs s 71% ███████▎\r\n examples/seq2seq/test_seq2seq_examples.py::TestSummarizationDistiller.test_loss_fn ✓ 86% ████████▋\r\n examples/seq2seq/test_seq2seq_examples.py::TestSummarizationDistiller.test_multigpu s 100% ██████████\r\n============================================================== short test summary info ===============================================================\r\nFAILED examples/seq2seq/test_seq2seq_examples.py::TestSummarizationDistiller::test_distill_checkpointing_with_teacher - AssertionError: 1 != 0\r\nFAILED examples/seq2seq/test_seq2seq_examples.py::TestSummarizationDistiller::test_distill_mbart - AssertionError: 1 != 0\r\nFAILED examples/seq2seq/test_seq2seq_examples.py::TestSummarizationDistiller::test_distill_no_teacher - AssertionError: assert 0 > 0\r\n\r\nResults (18.02s):\r\n 1 passed\r\n 3 failed\r\n - examples/seq2seq/test_seq2seq_examples.py:161 TestSummarizationDistiller.test_distill_checkpointing_with_teacher\r\n - examples/seq2seq/test_seq2seq_examples.py:206 
TestSummarizationDistiller.test_distill_mbart\r\n - examples/seq2seq/test_seq2seq_examples.py:157 TestSummarizationDistiller.test_distill_no_teacher\r\n 3 skipped\r\n```\r\nand it hangs. getting a trace:\r\n```\r\nThread 0x00007fc81efd6700 (most recent call first):\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 306 in wait\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/queue.py\", line 179 in get\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py\", line 232 in run\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 932 in _bootstrap_inner\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 890 in _bootstrap\r\n\r\nThread 0x00007fc80e850700 (most recent call first):\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 306 in wait\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/queue.py\", line 179 in get\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py\", line 232 in run\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 932 in _bootstrap_inner\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 890 in _bootstrap\r\n\r\nThread 0x00007fc8248b0700 (most recent call first):\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 306 in wait\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/queue.py\", line 179 in get\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py\", line 232 in run\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 932 in _bootstrap_inner\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 890 in _bootstrap\r\n\r\nThread 0x00007fc8250b1700 (most recent call 
first):\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 306 in wait\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/queue.py\", line 179 in get\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py\", line 232 in run\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 932 in _bootstrap_inner\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 890 in _bootstrap\r\n\r\nThread 0x00007fc81cbec700 (most recent call first):\r\n<no Python frame>\r\n\r\nThread 0x00007fc81e7d5700 (most recent call first):\r\n<no Python frame>\r\n\r\nThread 0x00007fc826ffd700 (most recent call first):\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 306 in wait\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/queue.py\", line 179 in get\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py\", line 232 in run\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 932 in _bootstrap_inner\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/threading.py\", line 890 in _bootstrap\r\n\r\nCurrent thread 0x00007fc91d2e2740 (most recent call first):\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/distributed/rendezvous.py\", line 172 in _env_rendezvous_handler\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py\", line 423 in init_process_group\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py\", line 973 in init_ddp_connection\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py\", line 506 in ddp_train\r\n File 
\"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py\", line 462 in spawn_ddp_children\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py\", line 992 in fit\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/examples/lightning_base.py\", line 380 in generic_train\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/finetune.py\", line 420 in main\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/distillation.py\", line 511 in distill_main\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/test_seq2seq_examples.py\", line 267 in _test_distiller_cli\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/test_seq2seq_examples.py\", line 157 in test_multigpu\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/unittest/case.py\", line 633 in _callTestMethod\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/unittest/case.py\", line 676 in run\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/unittest/case.py\", line 736 in __call__\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/unittest.py\", line 278 in runtest\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/runner.py\", line 153 in pytest_runtest_call\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/callers.py\", line 187 in _multicall\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/manager.py\", line 84 in <lambda>\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/manager.py\", line 93 in _hookexec\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/hooks.py\", line 286 in __call__\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/runner.py\", line 247 in 
<lambda>\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/runner.py\", line 294 in from_call\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/runner.py\", line 246 in call_runtest_hook\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/runner.py\", line 207 in call_and_report\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/runner.py\", line 117 in runtestprotocol\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/runner.py\", line 100 in pytest_runtest_protocol\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/callers.py\", line 187 in _multicall\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/manager.py\", line 84 in <lambda>\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/manager.py\", line 93 in _hookexec\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/hooks.py\", line 286 in __call__\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/main.py\", line 321 in pytest_runtestloop\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/callers.py\", line 187 in _multicall\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/manager.py\", line 84 in <lambda>\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/manager.py\", line 93 in _hookexec\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/hooks.py\", line 286 in __call__\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/main.py\", line 296 in _main\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/main.py\", line 240 in wrap_session\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/main.py\", line 289 in 
pytest_cmdline_main\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/callers.py\", line 187 in _multicall\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/manager.py\", line 84 in <lambda>\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/manager.py\", line 93 in _hookexec\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/hooks.py\", line 286 in __call__\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/config/__init__.py\", line 157 in main\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/config/__init__.py\", line 180 in console_main\r\n File \"/home/stas/anaconda3/envs/main-38/bin/pytest\", line 8 in <module>\r\n```\r\n\r\nIf I run just these 3 tests - all is green. So its the multigpu test.\r\n\r\nAh, it somehow starts running pytest second time half-way through.\r\n\r\nRechecked with py37 and py38 - same, same.\r\n\r\np.s. checkout `pip install pytest-sugar` - makes running tests a brighter experience.",
"I will try to make a small reproducible test where the problem still happens, and then we can try to sort it out.",
"OK, so first there is some strange issue with the properties - seems to be related to https://github.com/pytorch/pytorch/issues/43542, the multigpu test fails with:\r\n```\r\n\r\n―――――――――――――――――――――――――――――――――――――――――――――――――――――― TestSummarizationDistiller.test_multigpu ――――――――――――――――――――――――――――――――――――――――――――――――――――――\r\n\r\nself = <seq2seq.test_seq2seq_examples.TestSummarizationDistiller testMethod=test_multigpu>\r\n\r\n @require_multigpu\r\n def test_multigpu(self):\r\n updates = dict(\r\n no_teacher=True,\r\n freeze_encoder=True,\r\n gpus=2,\r\n sortish_sampler=True,\r\n )\r\n> self._test_distiller_cli(updates, check_contents=False)\r\n\r\nexamples/seq2seq/test_seq2seq_examples.py:154: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\nexamples/seq2seq/test_seq2seq_examples.py:264: in _test_distiller_cli\r\n model = distill_main(argparse.Namespace(**args_d))\r\nexamples/seq2seq/distillation.py:495: in distill_main\r\n return ft_main(args, model=model)\r\nexamples/seq2seq/finetune.py:399: in main\r\n trainer: pl.Trainer = generic_train(\r\nexamples/lightning_base.py:380: in generic_train\r\n trainer.fit(model)\r\n../../github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/trainer.py:301: in fit\r\n results = self.accelerator_backend.train()\r\n../../github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/ddp_backend.py:133: in train\r\n self.ddp_train_tmp(process_idx=self.task_idx, mp_queue=None, model=model)\r\n../../github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/ddp_base_backend.py:148: in ddp_train_tmp\r\n optimizers, lr_schedulers, optimizer_frequencies = self.trainer.init_optimizers(model)\r\n../../github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/optimizers.py:32: in init_optimizers\r\n optim_conf = model.configure_optimizers()\r\nexamples/lightning_base.py:152: in configure_optimizers\r\n 
scheduler = self.get_lr_scheduler()\r\nexamples/lightning_base.py:122: in get_lr_scheduler\r\n self.opt, num_warmup_steps=self.hparams.warmup_steps, num_training_steps=self.total_steps\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = SummarizationModule(\r\n (model): BartForConditionalGeneration(\r\n (model): BartModel(\r\n (shared): Embedding(50265,... )\r\n )\r\n (layernorm_embedding): LayerNorm((24,), eps=1e-05, elementwise_affine=True)\r\n )\r\n )\r\n )\r\n)\r\nname = 'total_steps'\r\n\r\n def __getattr__(self, name: str) -> Union[Tensor, 'Module']:\r\n if '_parameters' in self.__dict__:\r\n _parameters = self.__dict__['_parameters']\r\n if name in _parameters:\r\n return _parameters[name]\r\n if '_buffers' in self.__dict__:\r\n _buffers = self.__dict__['_buffers']\r\n if name in _buffers:\r\n return _buffers[name]\r\n if '_modules' in self.__dict__:\r\n modules = self.__dict__['_modules']\r\n if name in modules:\r\n return modules[name]\r\n> raise ModuleAttributeError(\"'{}' object has no attribute '{}'\".format(\r\n type(self).__name__, name))\r\nE torch.nn.modules.module.ModuleAttributeError: 'SummarizationModule' object has no attribute 'total_steps'\r\n```\r\nI fixed it with:\r\n```\r\ndiff --git a/examples/lightning_base.py b/examples/lightning_base.py\r\nindex e7c41a3e..f5e858c7 100644\r\n--- a/examples/lightning_base.py\r\n+++ b/examples/lightning_base.py\r\n@@ -119,8 +119,7 @@ class BaseTransformer(pl.LightningModule):\r\n def get_lr_scheduler(self):\r\n get_schedule_func = arg_to_scheduler[self.hparams.lr_scheduler]\r\n scheduler = get_schedule_func(\r\n- self.opt, num_warmup_steps=self.hparams.warmup_steps, num_training_steps=self.total_steps\r\n- )\r\n+ self.opt, num_warmup_steps=self.hparams.warmup_steps, num_training_steps=self.total_steps())\r\n scheduler = {\"scheduler\": scheduler, \"interval\": \"step\", \"frequency\": 1}\r\n 
return scheduler\r\n\r\n@@ -159,12 +158,11 @@ class BaseTransformer(pl.LightningModule):\r\n def test_epoch_end(self, outputs):\r\n return self.validation_end(outputs)\r\n\r\n- @property\r\n def total_steps(self) -> int:\r\n \"\"\"The number of total training steps that will be run. Used for lr scheduler purposes.\"\"\"\r\n num_devices = max(1, self.hparams.gpus) # TODO: consider num_tpu_cores\r\n effective_batch_size = self.hparams.train_batch_size * self.hparams.accumulate_grad_batches * num_devices\r\n- dataset_size = len(self.train_loader.dataset)\r\n+ dataset_size = len(self.train_dataloader().dataset)\r\n return (dataset_size / effective_batch_size) * self.hparams.max_epochs\r\n\r\n def setup(self, mode):\r\n```\r\n\r\nit works fine the first time, but when it re-reruns itself, it behaves differently - so the above fix was needed.",
"WIP: https://github.com/huggingface/transformers/pull/7281\r\n"
] | 1,595 | 1,603 | 1,603 | CONTRIBUTOR | null | This is not run by CI, but if you go on a machine with multiple GPUs, you will see an interesting unittest failure:
```bash
pytest examples -k multigpu
```
```
===================================== FAILURES ======================================
_____________________ TestSummarizationDistiller.test_multigpu ______________________
self = <seq2seq.test_seq2seq_examples.TestSummarizationDistiller testMethod=test_multigpu>
@require_multigpu
def test_multigpu(self):
updates = dict(no_teacher=True, freeze_encoder=True, gpus=2, sortish_sampler=False,)
> self._test_distiller_cli(updates)
examples/seq2seq/test_seq2seq_examples.py:117:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
examples/seq2seq/test_seq2seq_examples.py:182: in _test_distiller_cli
self.assertIn(ckpt_name, contents)
E AssertionError: 'val_avg_rouge2=0.0000-step_count=2.ckpt' not found in {'metrics.json', 'hparams.pk
l', 'git_log.json'}
============================== short test summary info ==============================
FAILED examples/seq2seq/test_seq2seq_examples.py::TestSummarizationDistiller::test_multigpu
========================= 1 failed, 25 deselected in 9.75s ==========================
...
'step_count': 4,
'test_avg_gen_time': 0.9470310211181641,
'test_avg_loss': 10.737613677978516,
'test_avg_rouge1': 0.0,
'test_avg_rouge2': 0.0,
'test_avg_rougeL': 0.0,
'test_avg_summ_len': 141.0,
'test_loss': tensor(10.7376, device='cuda:0'),
'test_rouge2': tensor(0., device='cuda:0')}
--------------------------------------------------------------------------------
Testing: 100%|█████████████████████████████████████████| 1/1 [00:01<00:00, 1.49s/it]
F
===================================== FAILURES ======================================
_____________________ TestSummarizationDistiller.test_multigpu ______________________
self = <seq2seq.test_seq2seq_examples.TestSummarizationDistiller testMethod=test_multigpu>
@require_multigpu
def test_multigpu(self):
updates = dict(no_teacher=True, freeze_encoder=True, gpus=2, sortish_sampler=False,)
> self._test_distiller_cli(updates)
examples/seq2seq/test_seq2seq_examples.py:117:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
examples/seq2seq/test_seq2seq_examples.py:193: in _test_distiller_cli
self.assertEqual(len(metrics["val"]), desired_n_evals)
E AssertionError: 3 != 5
============================== short test summary info ==============================
FAILED examples/seq2seq/test_seq2seq_examples.py::TestSummarizationDistiller::test_multigpu
========================= 1 failed, 25 deselected in 47.48s =========================
```
The same `SummarizationDistiller.test_multigpu` fails twice. I think we need to use `pl.Trainer.from_argparse_args` in the unittests.
How do you guys do it @nateraw ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5887/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5886 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5886/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5886/comments | https://api.github.com/repos/huggingface/transformers/issues/5886/events | https://github.com/huggingface/transformers/issues/5886 | 660,475,203 | MDU6SXNzdWU2NjA0NzUyMDM= | 5,886 | DataCollatorForLanguageModeling - Shift labels for left-to-right LM? | {
"login": "shtoshni",
"id": 14910924,
"node_id": "MDQ6VXNlcjE0OTEwOTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/14910924?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shtoshni",
"html_url": "https://github.com/shtoshni",
"followers_url": "https://api.github.com/users/shtoshni/followers",
"following_url": "https://api.github.com/users/shtoshni/following{/other_user}",
"gists_url": "https://api.github.com/users/shtoshni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shtoshni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shtoshni/subscriptions",
"organizations_url": "https://api.github.com/users/shtoshni/orgs",
"repos_url": "https://api.github.com/users/shtoshni/repos",
"events_url": "https://api.github.com/users/shtoshni/events{/privacy}",
"received_events_url": "https://api.github.com/users/shtoshni/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Actually never mind. At least GPT2 does the shifting internally. "
] | 1,595 | 1,595 | 1,595 | NONE | null | Hi,
Shouldn't the labels be shifted 1 step right in the data collation step at -https://github.com/huggingface/transformers/blob/09a2f40684f77e62d0fd8485fe9d2d610390453f/src/transformers/data/data_collator.py#L88
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5886/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5885 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5885/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5885/comments | https://api.github.com/repos/huggingface/transformers/issues/5885/events | https://github.com/huggingface/transformers/pull/5885 | 660,342,365 | MDExOlB1bGxSZXF1ZXN0NDUyMTA3NTAw | 5,885 | feat: allow prefix for any generative model | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
},
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5885?src=pr&el=h1) Report\n> Merging [#5885](https://codecov.io/gh/huggingface/transformers/pull/5885?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42fddacd1cac3cc57c3326aa51a409f5090b1261?el=desc) will **decrease** coverage by `0.88%`.\n> The diff coverage is `75.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5885?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5885 +/- ##\n==========================================\n- Coverage 78.47% 77.58% -0.89% \n==========================================\n Files 157 157 \n Lines 28569 28571 +2 \n==========================================\n- Hits 22420 22168 -252 \n- Misses 6149 6403 +254 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5885?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `80.50% <75.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-79.30%)` | :arrow_down: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% 
<0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/5885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.05% <0.00%> (-0.35%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5885/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (-0.28%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/5885/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5885?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5885?src=pr&el=footer). Last update [42fddac...00efd7d](https://codecov.io/gh/huggingface/transformers/pull/5885?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Another solution would be to just allow `prefix` element like in `TranslationPipeline`. The main difference will be that it becomes part of the display and it should not affect too much future plans such as suggested above with `padding_text`, `prefix` and `suffix`.\r\n\r\nI still think that `padding_text` as proposed here will be useful in the future to easily provide \"GPT-3 type\" widgets.",
"I updated my [example](https://gist.github.com/borisdayma/89aaf1587390340add44c3c7081bffcd) of the feature with the case when it loads model config.",
"Let me know if I can provide any additional information or make any changes",
"Just checking if you need anything else from me @julien-c",
"I’m off for a couple of days so I’ll let others chime in",
"Enjoy your time off!",
"@julien-c @mfuntowicz sorry I'm just pinging.\r\nI have a [demo](https://huggingface.co/huggingtweets/julien_c) I'm excited about and this feature just makes predictions so much better.\r\n*Note: not on master branch - dev version [here](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/feat-artifacts/huggingtweets-demo.ipynb)*",
"Thoughts @patrickvonplaten @sshleifer? (text generation pipeline option)",
"This PR adds the padding functionality from `run_generation` to pipelines (and allow for custom text).\r\n\r\nHowever the main idea would be to eventually allow \"GPT-3 style\" API's.\r\n\r\n\r\n\r\nIt could be renamed \"prefix\" but I suggest to make a distinction between padding and prefix where you would potentially want to display prefix & suffix in an interface (for example huggingface widget) but not necessarily the \"padding\".\r\n\r\nThis kind of API could be used for T5 and can let people create more modular inference examples.",
"@sshleifer I gave more thoughts to it and I agree we should just name it `prefix` and maybe add a \"deprecated warning\" on `run_generation`.\r\nIt should be sufficient for now and we will always be able to add a more complex behavior in the future if necessary.\r\n\r\nNote: I don't understand the CI errors, quality is good on my side and the other tests errors seem unrelated to this PR",
"style: you need to rebase and then `pip install black --upgrade`\r\nOther CI failures are spurious, you can ignore them.\r\n\r\nMy last request would be to add test coverage for the new argument in test_pipelines.py.\r\nsee:\r\n```\r\ntest_torch_text_generation\r\ntest_tf_text_generation\r\n```\r\n\r\n",
"Changes:\r\n\r\n* renamed `padding_text` to `prefix` in pipelines and `run_generation.py` (with warning of deprecated argument)\r\n* since PR was made, we cannot tokenize empty text anymore so we now check for empty \"prefix\"\r\n* since PR was made, we cannot pass extra args to `generate` so we explicitly add `prefix` to pipeline `__call__` method\r\n* added tests for pipeline using prefix\r\n\r\nNotes:\r\n\r\n* we set default `prefix` to `None` (vs `\"\"`) as we could pass `\"\"` to overwrite model default value.\r\n* sorry I should have done a merge instead of rebase as it changes the history. New commits start at \"feat: rename padding_text to prefix\"",
"Thanks @LysandreJik. Yes I noticed later that my results were better without the initial space as well.\r\n\r\nIntuitively it seemed to me like a waste of encoding differentiating between having a space or not (for example even after new lines) as the start of a document can simply be identified by the special token.\r\nI thought there were more examples without the space within the original training set which would help my fine-tuning, as long as I do the same both for training and inference.\r\nIn the end, I get my best results with just `<|endoftext|>` between each sample and stripping every extra space.",
"@LysandreJik let me know if anything else is required on my side.\r\nSince it's a relatively large PR it can be hard to remain in sync with master.",
"I think we can merge this: @LysandreJik I leave it to you to merge :-) ",
"+ @mfuntowicz can you update the transformers dependency in the inference API when this lands on master, so that @borisdayma can showcase his magic ;)",
"@mfuntowicz can you let me know when you update the inference API so I do some tests on the widget?",
"@borisdayma Everything has been deployed on the inference API, servers are up. \r\n\r\nThanks for taking care of this 💪 🙏 ",
"That was fast! Thanks it looks to be working great!"
] | 1,595 | 1,599 | 1,599 | CONTRIBUTOR | null | Allow `padding_text` for any generative model.
### Motivations
* some models require some text at the beginning such as GPT-2 which should always have a white space. GPT-2 model could now have `config.padding_text` set to `" "` to ensure this is done
* this argument adds more customization to huggingface inference widget (related issue #5553). For huggingtweets models I can set `model.config.task_specific_params['text-generation']['padding_text'] = '<|endoftext|> '`
See [example](https://gist.github.com/borisdayma/89aaf1587390340add44c3c7081bffcd) of use of `padding_text` to create negative/positive sentences.
*Side note*: it is not completely obvious that a leading space should always be added for GPT-2, as `" \n This"` is not tokenized the same as `" \nThis"`, which GPT-2 may have seen more of in its training data (unless white spaces are added after new lines as well)
### Future improvements
In the future, we should also add `prefix` and `suffix`. This could let us do things similar to GPT-3 with the inference widget and specific tasks:
* `padding_text` would be used to initialize tasks (ex: 10 examples conversation with a bot). This is always cropped out and never displayed in the output
* `prefix` would add required text (ex: `" Human: "`). This is displayed in output.
* `suffix` would add required text after prompt (ex: `" \n AI: "`). This is displayed in output. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5885/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5885",
"html_url": "https://github.com/huggingface/transformers/pull/5885",
"diff_url": "https://github.com/huggingface/transformers/pull/5885.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5885.patch",
"merged_at": 1599462226000
} |
https://api.github.com/repos/huggingface/transformers/issues/5884 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5884/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5884/comments | https://api.github.com/repos/huggingface/transformers/issues/5884/events | https://github.com/huggingface/transformers/pull/5884 | 660,284,725 | MDExOlB1bGxSZXF1ZXN0NDUyMDU0NzYx | 5,884 | Add ComVE model cards | {
"login": "AliOsm",
"id": 7662492,
"node_id": "MDQ6VXNlcjc2NjI0OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7662492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AliOsm",
"html_url": "https://github.com/AliOsm",
"followers_url": "https://api.github.com/users/AliOsm/followers",
"following_url": "https://api.github.com/users/AliOsm/following{/other_user}",
"gists_url": "https://api.github.com/users/AliOsm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AliOsm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AliOsm/subscriptions",
"organizations_url": "https://api.github.com/users/AliOsm/orgs",
"repos_url": "https://api.github.com/users/AliOsm/repos",
"events_url": "https://api.github.com/users/AliOsm/events{/privacy}",
"received_events_url": "https://api.github.com/users/AliOsm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5884?src=pr&el=h1) Report\n> Merging [#5884](https://codecov.io/gh/huggingface/transformers/pull/5884?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4b506a37e3e0ff679235961ba14dd9397843ef3a&el=desc) will **increase** coverage by `1.39%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5884?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5884 +/- ##\n==========================================\n+ Coverage 77.27% 78.67% +1.39% \n==========================================\n Files 146 146 \n Lines 26210 26210 \n==========================================\n+ Hits 20254 20620 +366 \n+ Misses 5956 5590 -366 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5884?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% 
<0.00%> (+69.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5884?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5884?src=pr&el=footer). Last update [4b506a3...d67656b](https://codecov.io/gh/huggingface/transformers/pull/5884?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is really cool, thanks for sharing"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Add `aliosm/ComVE*` model cards. These models were part of this work: https://sentic.net/transformer-models-for-commonsense-reasoning.pdf. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5884/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5884",
"html_url": "https://github.com/huggingface/transformers/pull/5884",
"diff_url": "https://github.com/huggingface/transformers/pull/5884.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5884.patch",
"merged_at": 1595350470000
} |
https://api.github.com/repos/huggingface/transformers/issues/5883 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5883/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5883/comments | https://api.github.com/repos/huggingface/transformers/issues/5883/events | https://github.com/huggingface/transformers/pull/5883 | 660,205,544 | MDExOlB1bGxSZXF1ZXN0NDUxOTgyMzIy | 5,883 | Xlnet outputs | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5883?src=pr&el=h1) Report\n> Merging [#5883](https://codecov.io/gh/huggingface/transformers/pull/5883?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3653d01f2af0389207f2239875a8ceae41bf0598&el=desc) will **increase** coverage by `1.36%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5883?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5883 +/- ##\n==========================================\n+ Coverage 77.26% 78.62% +1.36% \n==========================================\n Files 146 146 \n Lines 25948 25958 +10 \n==========================================\n+ Hits 20048 20409 +361 \n+ Misses 5900 5549 -351 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5883?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5883/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.52% <ø> (ø)` | |\n| [src/transformers/configuration\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5883/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.33% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5883/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `81.75% <100.00%> (+0.26%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5883/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5883/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| 
[src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5883/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5883/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5883/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5883?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5883?src=pr&el=footer). Last update [3653d01...0c73c6d](https://codecov.io/gh/huggingface/transformers/pull/5883?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Did @LysandreJik @patrickvonplaten or @sgugger get a chance to review this before merging, @TevenLeScao?",
"@patrickvonplaten and @joeddav reviewed it (@LysandreJik was off), there's quite a bit of discussion in #5770 if you wanna take a look!"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Reopening #5770 again | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5883/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5883",
"html_url": "https://github.com/huggingface/transformers/pull/5883",
"diff_url": "https://github.com/huggingface/transformers/pull/5883.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5883.patch",
"merged_at": 1595086394000
} |
https://api.github.com/repos/huggingface/transformers/issues/5882 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5882/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5882/comments | https://api.github.com/repos/huggingface/transformers/issues/5882/events | https://github.com/huggingface/transformers/pull/5882 | 660,202,905 | MDExOlB1bGxSZXF1ZXN0NDUxOTc5OTY5 | 5,882 | Revert "Xlnet outputs" | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5882?src=pr&el=h1) Report\n> Merging [#5882](https://codecov.io/gh/huggingface/transformers/pull/5882?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/13be4872123094c37eb5fab939b38967b0ad2cd0&el=desc) will **increase** coverage by `0.21%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5882?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5882 +/- ##\n==========================================\n+ Coverage 78.27% 78.48% +0.21% \n==========================================\n Files 146 146 \n Lines 26210 26200 -10 \n==========================================\n+ Hits 20515 20563 +48 \n+ Misses 5695 5637 -58 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5882?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.52% <ø> (ø)` | |\n| [src/transformers/configuration\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.33% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `81.49% <100.00%> (-0.27%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| 
[src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5882/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5882?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5882?src=pr&el=footer). Last update [13be487...1be083d](https://codecov.io/gh/huggingface/transformers/pull/5882?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | This seems to break a test that isn't a CI. Think I've fixed it, will revert and re-open. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5882/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5882",
"html_url": "https://github.com/huggingface/transformers/pull/5882",
"diff_url": "https://github.com/huggingface/transformers/pull/5882.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5882.patch",
"merged_at": 1595085341000
} |
https://api.github.com/repos/huggingface/transformers/issues/5881 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5881/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5881/comments | https://api.github.com/repos/huggingface/transformers/issues/5881/events | https://github.com/huggingface/transformers/pull/5881 | 660,187,710 | MDExOlB1bGxSZXF1ZXN0NDUxOTY2MTcz | 5,881 | Xlnet outputs | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5881?src=pr&el=h1) Report\n> Merging [#5881](https://codecov.io/gh/huggingface/transformers/pull/5881?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3653d01f2af0389207f2239875a8ceae41bf0598&el=desc) will **increase** coverage by `0.54%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5881?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5881 +/- ##\n==========================================\n+ Coverage 77.26% 77.81% +0.54% \n==========================================\n Files 146 146 \n Lines 25948 25958 +10 \n==========================================\n+ Hits 20048 20198 +150 \n+ Misses 5900 5760 -140 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5881?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.52% <ø> (ø)` | |\n| [src/transformers/configuration\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.33% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `81.75% <100.00%> (+0.26%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5881/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: 
|\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5881?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5881?src=pr&el=footer). Last update [3653d01...cfd7c66](https://codecov.io/gh/huggingface/transformers/pull/5881?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Reopening #5770 since push failed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5881/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5881",
"html_url": "https://github.com/huggingface/transformers/pull/5881",
"diff_url": "https://github.com/huggingface/transformers/pull/5881.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5881.patch",
"merged_at": 1595084009000
} |
https://api.github.com/repos/huggingface/transformers/issues/5880 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5880/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5880/comments | https://api.github.com/repos/huggingface/transformers/issues/5880/events | https://github.com/huggingface/transformers/issues/5880 | 660,149,070 | MDU6SXNzdWU2NjAxNDkwNzA= | 5,880 | How to get probabilities from MarianMT models? | {
"login": "bes-dev",
"id": 3617413,
"node_id": "MDQ6VXNlcjM2MTc0MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3617413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bes-dev",
"html_url": "https://github.com/bes-dev",
"followers_url": "https://api.github.com/users/bes-dev/followers",
"following_url": "https://api.github.com/users/bes-dev/following{/other_user}",
"gists_url": "https://api.github.com/users/bes-dev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bes-dev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bes-dev/subscriptions",
"organizations_url": "https://api.github.com/users/bes-dev/orgs",
"repos_url": "https://api.github.com/users/bes-dev/repos",
"events_url": "https://api.github.com/users/bes-dev/events{/privacy}",
"received_events_url": "https://api.github.com/users/bes-dev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | Hey,
I'm trying to use MarianMT models for distillation, but I didn't find a way to get the raw probabilities from the model.
I use the model as in the examples:
```python
from transformers import MarianMTModel, MarianTokenizer

class Translator:
    def __init__(self, src, dst):
        model_name = f'Helsinki-NLP/opus-mt-{src}-{dst}'
        self.tokenizer = MarianTokenizer.from_pretrained(model_name)
        self.model = MarianMTModel.from_pretrained(model_name)

    def __call__(self, text):
        if not isinstance(text, list):
            text = [text]
        preds = self.model.generate(**self.tokenizer.prepare_translation_batch(text))
        print(preds)
        translation = [self.tokenizer.decode(t, skip_special_tokens=True) for t in preds]
        return translation
```
How can I get translation and probs via Huggingface API?
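One common approach (not part of the original issue, sketched here as an assumption) is to run `generate` to get the token ids, then re-run the model forward with those ids as `decoder_input_ids` and take a log-softmax over the decoder logits. The helper below implements only the generic scoring step with plain `torch`; the commented lines show how it might plug into the `Translator` class above (the `tokenizer`/`model` names are placeholders, not verified API usage):

```python
import torch
import torch.nn.functional as F

def sequence_log_probs(logits, token_ids):
    """Per-token log-probabilities of `token_ids` under decoder `logits`.

    logits:    (batch, seq_len, vocab_size) raw decoder logits
    token_ids: (batch, seq_len) generated ids to score
    """
    # Normalize logits into log-probabilities over the vocabulary,
    # then pick out the log-prob assigned to each generated token.
    log_probs = F.log_softmax(logits, dim=-1)
    return log_probs.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)

# Hypothetical usage with the Translator class above:
#   batch = self.tokenizer.prepare_translation_batch(text)
#   preds = self.model.generate(**batch)
#   outputs = self.model(input_ids=batch["input_ids"],
#                        attention_mask=batch["attention_mask"],
#                        decoder_input_ids=preds)
#   token_probs = sequence_log_probs(outputs[0], preds).exp()
```

Summing the per-token log-probs (masking padding) would then give a sequence-level score usable as a soft target for distillation.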
Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5880/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5880/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5879 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5879/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5879/comments | https://api.github.com/repos/huggingface/transformers/issues/5879/events | https://github.com/huggingface/transformers/pull/5879 | 660,139,101 | MDExOlB1bGxSZXF1ZXN0NDUxOTIxODA1 | 5,879 | Create README.md | {
"login": "jannesgg",
"id": 36601086,
"node_id": "MDQ6VXNlcjM2NjAxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36601086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannesgg",
"html_url": "https://github.com/jannesgg",
"followers_url": "https://api.github.com/users/jannesgg/followers",
"following_url": "https://api.github.com/users/jannesgg/following{/other_user}",
"gists_url": "https://api.github.com/users/jannesgg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannesgg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannesgg/subscriptions",
"organizations_url": "https://api.github.com/users/jannesgg/orgs",
"repos_url": "https://api.github.com/users/jannesgg/repos",
"events_url": "https://api.github.com/users/jannesgg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannesgg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5879?src=pr&el=h1) Report\n> Merging [#5879](https://codecov.io/gh/huggingface/transformers/pull/5879?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eae6d8d14f1d25d62c3fe9e7e410607bbaf69787&el=desc) will **increase** coverage by `1.11%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5879?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5879 +/- ##\n==========================================\n+ Coverage 77.54% 78.66% +1.11% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n+ Hits 20318 20609 +291 \n+ Misses 5882 5591 -291 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5879?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5879/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5879/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5879/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5879?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5879?src=pr&el=footer). 
Last update [eae6d8d...0c9ff72](https://codecov.io/gh/huggingface/transformers/pull/5879?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5879/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5879",
"html_url": "https://github.com/huggingface/transformers/pull/5879",
"diff_url": "https://github.com/huggingface/transformers/pull/5879.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5879.patch",
"merged_at": 1595352433000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5878 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5878/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5878/comments | https://api.github.com/repos/huggingface/transformers/issues/5878/events | https://github.com/huggingface/transformers/pull/5878 | 660,138,504 | MDExOlB1bGxSZXF1ZXN0NDUxOTIxMjk4 | 5,878 | Create README.md | {
"login": "jannesgg",
"id": 36601086,
"node_id": "MDQ6VXNlcjM2NjAxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36601086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannesgg",
"html_url": "https://github.com/jannesgg",
"followers_url": "https://api.github.com/users/jannesgg/followers",
"following_url": "https://api.github.com/users/jannesgg/following{/other_user}",
"gists_url": "https://api.github.com/users/jannesgg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannesgg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannesgg/subscriptions",
"organizations_url": "https://api.github.com/users/jannesgg/orgs",
"repos_url": "https://api.github.com/users/jannesgg/repos",
"events_url": "https://api.github.com/users/jannesgg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannesgg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5878?src=pr&el=h1) Report\n> Merging [#5878](https://codecov.io/gh/huggingface/transformers/pull/5878?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eae6d8d14f1d25d62c3fe9e7e410607bbaf69787&el=desc) will **increase** coverage by `1.11%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5878?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5878 +/- ##\n==========================================\n+ Coverage 77.54% 78.66% +1.11% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n+ Hits 20318 20610 +292 \n+ Misses 5882 5590 -292 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5878?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5878?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5878?src=pr&el=footer). 
Last update [eae6d8d...e9b1654](https://codecov.io/gh/huggingface/transformers/pull/5878?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5878/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5878",
"html_url": "https://github.com/huggingface/transformers/pull/5878",
"diff_url": "https://github.com/huggingface/transformers/pull/5878.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5878.patch",
"merged_at": 1595352047000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5877 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5877/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5877/comments | https://api.github.com/repos/huggingface/transformers/issues/5877/events | https://github.com/huggingface/transformers/pull/5877 | 660,137,830 | MDExOlB1bGxSZXF1ZXN0NDUxOTIwNjkx | 5,877 | Create README.md | {
"login": "jannesgg",
"id": 36601086,
"node_id": "MDQ6VXNlcjM2NjAxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36601086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannesgg",
"html_url": "https://github.com/jannesgg",
"followers_url": "https://api.github.com/users/jannesgg/followers",
"following_url": "https://api.github.com/users/jannesgg/following{/other_user}",
"gists_url": "https://api.github.com/users/jannesgg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannesgg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannesgg/subscriptions",
"organizations_url": "https://api.github.com/users/jannesgg/orgs",
"repos_url": "https://api.github.com/users/jannesgg/repos",
"events_url": "https://api.github.com/users/jannesgg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannesgg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5877?src=pr&el=h1) Report\n> Merging [#5877](https://codecov.io/gh/huggingface/transformers/pull/5877?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eae6d8d14f1d25d62c3fe9e7e410607bbaf69787&el=desc) will **increase** coverage by `0.78%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5877?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5877 +/- ##\n==========================================\n+ Coverage 77.54% 78.33% +0.78% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n+ Hits 20318 20524 +206 \n+ Misses 5882 5676 -206 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5877?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5877?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5877?src=pr&el=footer). Last update [eae6d8d...c7a4977](https://codecov.io/gh/huggingface/transformers/pull/5877?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5877/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5877",
"html_url": "https://github.com/huggingface/transformers/pull/5877",
"diff_url": "https://github.com/huggingface/transformers/pull/5877.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5877.patch",
"merged_at": 1595352044000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5876 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5876/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5876/comments | https://api.github.com/repos/huggingface/transformers/issues/5876/events | https://github.com/huggingface/transformers/pull/5876 | 660,136,972 | MDExOlB1bGxSZXF1ZXN0NDUxOTE5OTEz | 5,876 | Create README.md | {
"login": "jannesgg",
"id": 36601086,
"node_id": "MDQ6VXNlcjM2NjAxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36601086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannesgg",
"html_url": "https://github.com/jannesgg",
"followers_url": "https://api.github.com/users/jannesgg/followers",
"following_url": "https://api.github.com/users/jannesgg/following{/other_user}",
"gists_url": "https://api.github.com/users/jannesgg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannesgg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannesgg/subscriptions",
"organizations_url": "https://api.github.com/users/jannesgg/orgs",
"repos_url": "https://api.github.com/users/jannesgg/repos",
"events_url": "https://api.github.com/users/jannesgg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannesgg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5876?src=pr&el=h1) Report\n> Merging [#5876](https://codecov.io/gh/huggingface/transformers/pull/5876?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eae6d8d14f1d25d62c3fe9e7e410607bbaf69787&el=desc) will **increase** coverage by `0.93%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5876?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5876 +/- ##\n==========================================\n+ Coverage 77.54% 78.48% +0.93% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n+ Hits 20318 20564 +246 \n+ Misses 5882 5636 -246 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5876?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5876/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5876/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5876/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5876/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5876?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5876?src=pr&el=footer). Last update [eae6d8d...befb998](https://codecov.io/gh/huggingface/transformers/pull/5876?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5876/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5876",
"html_url": "https://github.com/huggingface/transformers/pull/5876",
"diff_url": "https://github.com/huggingface/transformers/pull/5876.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5876.patch",
"merged_at": 1595352503000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5875 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5875/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5875/comments | https://api.github.com/repos/huggingface/transformers/issues/5875/events | https://github.com/huggingface/transformers/pull/5875 | 660,136,297 | MDExOlB1bGxSZXF1ZXN0NDUxOTE5Mjk0 | 5,875 | Create README.md | {
"login": "jannesgg",
"id": 36601086,
"node_id": "MDQ6VXNlcjM2NjAxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36601086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannesgg",
"html_url": "https://github.com/jannesgg",
"followers_url": "https://api.github.com/users/jannesgg/followers",
"following_url": "https://api.github.com/users/jannesgg/following{/other_user}",
"gists_url": "https://api.github.com/users/jannesgg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannesgg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannesgg/subscriptions",
"organizations_url": "https://api.github.com/users/jannesgg/orgs",
"repos_url": "https://api.github.com/users/jannesgg/repos",
"events_url": "https://api.github.com/users/jannesgg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannesgg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5875?src=pr&el=h1) Report\n> Merging [#5875](https://codecov.io/gh/huggingface/transformers/pull/5875?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eae6d8d14f1d25d62c3fe9e7e410607bbaf69787&el=desc) will **increase** coverage by `0.81%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5875?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5875 +/- ##\n==========================================\n+ Coverage 77.54% 78.36% +0.81% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n+ Hits 20318 20531 +213 \n+ Misses 5882 5669 -213 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5875?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5875/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5875/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5875/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `83.98% <0.00%> (-4.91%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5875/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5875/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) 
| `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5875/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-1.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5875/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5875/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5875?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5875?src=pr&el=footer). Last update [eae6d8d...103b7af](https://codecov.io/gh/huggingface/transformers/pull/5875?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5875/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5875",
"html_url": "https://github.com/huggingface/transformers/pull/5875",
"diff_url": "https://github.com/huggingface/transformers/pull/5875.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5875.patch",
"merged_at": 1595352033000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5874 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5874/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5874/comments | https://api.github.com/repos/huggingface/transformers/issues/5874/events | https://github.com/huggingface/transformers/pull/5874 | 660,135,171 | MDExOlB1bGxSZXF1ZXN0NDUxOTE4MjU5 | 5,874 | Create README.md | {
"login": "jannesgg",
"id": 36601086,
"node_id": "MDQ6VXNlcjM2NjAxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36601086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannesgg",
"html_url": "https://github.com/jannesgg",
"followers_url": "https://api.github.com/users/jannesgg/followers",
"following_url": "https://api.github.com/users/jannesgg/following{/other_user}",
"gists_url": "https://api.github.com/users/jannesgg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannesgg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannesgg/subscriptions",
"organizations_url": "https://api.github.com/users/jannesgg/orgs",
"repos_url": "https://api.github.com/users/jannesgg/repos",
"events_url": "https://api.github.com/users/jannesgg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannesgg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5874/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5874",
"html_url": "https://github.com/huggingface/transformers/pull/5874",
"diff_url": "https://github.com/huggingface/transformers/pull/5874.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5874.patch",
"merged_at": 1595352451000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5873 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5873/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5873/comments | https://api.github.com/repos/huggingface/transformers/issues/5873/events | https://github.com/huggingface/transformers/pull/5873 | 660,134,264 | MDExOlB1bGxSZXF1ZXN0NDUxOTE3NDMy | 5,873 | Create README.md | {
"login": "jannesgg",
"id": 36601086,
"node_id": "MDQ6VXNlcjM2NjAxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36601086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannesgg",
"html_url": "https://github.com/jannesgg",
"followers_url": "https://api.github.com/users/jannesgg/followers",
"following_url": "https://api.github.com/users/jannesgg/following{/other_user}",
"gists_url": "https://api.github.com/users/jannesgg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannesgg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannesgg/subscriptions",
"organizations_url": "https://api.github.com/users/jannesgg/orgs",
"repos_url": "https://api.github.com/users/jannesgg/repos",
"events_url": "https://api.github.com/users/jannesgg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannesgg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5873?src=pr&el=h1) Report\n> Merging [#5873](https://codecov.io/gh/huggingface/transformers/pull/5873?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eae6d8d14f1d25d62c3fe9e7e410607bbaf69787&el=desc) will **increase** coverage by `0.79%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5873?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5873 +/- ##\n==========================================\n+ Coverage 77.54% 78.34% +0.79% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n+ Hits 20318 20527 +209 \n+ Misses 5882 5673 -209 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5873?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5873/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5873/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5873/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5873/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5873?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5873?src=pr&el=footer). Last update [eae6d8d...f782f75](https://codecov.io/gh/huggingface/transformers/pull/5873?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5873/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5873",
"html_url": "https://github.com/huggingface/transformers/pull/5873",
"diff_url": "https://github.com/huggingface/transformers/pull/5873.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5873.patch",
"merged_at": 1595352487000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5872 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5872/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5872/comments | https://api.github.com/repos/huggingface/transformers/issues/5872/events | https://github.com/huggingface/transformers/pull/5872 | 660,133,291 | MDExOlB1bGxSZXF1ZXN0NDUxOTE2NTQ2 | 5,872 | Create README.md | {
"login": "jannesgg",
"id": 36601086,
"node_id": "MDQ6VXNlcjM2NjAxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36601086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannesgg",
"html_url": "https://github.com/jannesgg",
"followers_url": "https://api.github.com/users/jannesgg/followers",
"following_url": "https://api.github.com/users/jannesgg/following{/other_user}",
"gists_url": "https://api.github.com/users/jannesgg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannesgg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannesgg/subscriptions",
"organizations_url": "https://api.github.com/users/jannesgg/orgs",
"repos_url": "https://api.github.com/users/jannesgg/repos",
"events_url": "https://api.github.com/users/jannesgg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannesgg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5872?src=pr&el=h1) Report\n> Merging [#5872](https://codecov.io/gh/huggingface/transformers/pull/5872?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eae6d8d14f1d25d62c3fe9e7e410607bbaf69787&el=desc) will **increase** coverage by `1.11%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5872?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5872 +/- ##\n==========================================\n+ Coverage 77.54% 78.66% +1.11% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n+ Hits 20318 20609 +291 \n+ Misses 5882 5591 -291 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5872?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5872/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5872/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5872/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5872?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5872?src=pr&el=footer). 
Last update [eae6d8d...857c1c2](https://codecov.io/gh/huggingface/transformers/pull/5872?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5872/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5872",
"html_url": "https://github.com/huggingface/transformers/pull/5872",
"diff_url": "https://github.com/huggingface/transformers/pull/5872.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5872.patch",
"merged_at": 1595352022000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5871 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5871/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5871/comments | https://api.github.com/repos/huggingface/transformers/issues/5871/events | https://github.com/huggingface/transformers/pull/5871 | 660,132,278 | MDExOlB1bGxSZXF1ZXN0NDUxOTE1NjEw | 5,871 | Create README.md | {
"login": "jannesgg",
"id": 36601086,
"node_id": "MDQ6VXNlcjM2NjAxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36601086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannesgg",
"html_url": "https://github.com/jannesgg",
"followers_url": "https://api.github.com/users/jannesgg/followers",
"following_url": "https://api.github.com/users/jannesgg/following{/other_user}",
"gists_url": "https://api.github.com/users/jannesgg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannesgg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannesgg/subscriptions",
"organizations_url": "https://api.github.com/users/jannesgg/orgs",
"repos_url": "https://api.github.com/users/jannesgg/repos",
"events_url": "https://api.github.com/users/jannesgg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannesgg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5871?src=pr&el=h1) Report\n> Merging [#5871](https://codecov.io/gh/huggingface/transformers/pull/5871?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eae6d8d14f1d25d62c3fe9e7e410607bbaf69787&el=desc) will **increase** coverage by `1.11%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5871?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5871 +/- ##\n==========================================\n+ Coverage 77.54% 78.66% +1.11% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n+ Hits 20318 20609 +291 \n+ Misses 5882 5591 -291 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5871?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5871/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5871/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5871/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5871?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5871?src=pr&el=footer). 
Last update [eae6d8d...823159f](https://codecov.io/gh/huggingface/transformers/pull/5871?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5871/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5871",
"html_url": "https://github.com/huggingface/transformers/pull/5871",
"diff_url": "https://github.com/huggingface/transformers/pull/5871.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5871.patch",
"merged_at": 1595351989000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5870 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5870/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5870/comments | https://api.github.com/repos/huggingface/transformers/issues/5870/events | https://github.com/huggingface/transformers/pull/5870 | 660,115,710 | MDExOlB1bGxSZXF1ZXN0NDUxOTAwNDAw | 5,870 | [cleanup] Less aggressive warnings about checkpoint mismatches | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5870?src=pr&el=h1) Report\n> Merging [#5870](https://codecov.io/gh/huggingface/transformers/pull/5870?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dad5e12e54bc2cf80a24b3430b5c847fc213a73e&el=desc) will **decrease** coverage by `0.49%`.\n> The diff coverage is `85.71%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5870?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5870 +/- ##\n==========================================\n- Coverage 78.48% 77.98% -0.50% \n==========================================\n Files 146 146 \n Lines 26200 26203 +3 \n==========================================\n- Hits 20563 20435 -128 \n- Misses 5637 5768 +131 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5870?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5870/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.02% <85.71%> (-0.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5870/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5870/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5870/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5870/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% 
<0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5870/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5870?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5870?src=pr&el=footer). Last update [dad5e12...02f32f1](https://codecov.io/gh/huggingface/transformers/pull/5870?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | These are often spurious, so, in general, I made them shorter, and more about what happened than how to fix it.
Also, removed warnings for missing keys in seq2seq checkpoints. LM head is often made on the fly (to make the download cheaper).
Some examples:
- key 'encoder.version' in `bart.large` should not be trained more
- no keys in `T5ForConditionalGeneration` should be trained more.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5870/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5870",
"html_url": "https://github.com/huggingface/transformers/pull/5870",
"diff_url": "https://github.com/huggingface/transformers/pull/5870.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5870.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5869 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5869/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5869/comments | https://api.github.com/repos/huggingface/transformers/issues/5869/events | https://github.com/huggingface/transformers/issues/5869 | 660,106,421 | MDU6SXNzdWU2NjAxMDY0MjE= | 5,869 | Silenced error while downloading pretrained model | {
"login": "festeh",
"id": 6877858,
"node_id": "MDQ6VXNlcjY4Nzc4NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6877858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/festeh",
"html_url": "https://github.com/festeh",
"followers_url": "https://api.github.com/users/festeh/followers",
"following_url": "https://api.github.com/users/festeh/following{/other_user}",
"gists_url": "https://api.github.com/users/festeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/festeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/festeh/subscriptions",
"organizations_url": "https://api.github.com/users/festeh/orgs",
"repos_url": "https://api.github.com/users/festeh/repos",
"events_url": "https://api.github.com/users/festeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/festeh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This sounds reasonable",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | # 🐛 Bug
Currently, if a user fails to download a model, they will receive an uninformative and confusing message (see #5787). This is most painful for users behind a proxy. This happens because of this [line](https://github.com/huggingface/transformers/blob/0533cf470659b97c6279bd04f65536a1ec88404a/src/transformers/file_utils.py#L681). My opinion is that there should be at least a warning, and if `force_download` is `True`, then an error should be raised. I can make a PR if you agree with that.
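A self-contained sketch of the proposed behavior (function and argument names are illustrative assumptions, not the actual `file_utils` API):

```python
import logging

logger = logging.getLogger(__name__)

def fetch_with_cache(url, download_fn, cached_file=None, force_download=False):
    """Sketch: surface download failures instead of silently falling
    back to a stale cache entry. `download_fn` stands in for the real
    HTTP download; all names here are illustrative."""
    try:
        return download_fn(url)
    except EnvironmentError as exc:
        if force_download:
            # The user explicitly asked for a fresh copy: fail loudly.
            raise
        if cached_file is not None:
            # Proposed change: warn instead of silently using the cache.
            logger.warning("Could not reach %s (%s); falling back to cached file.", url, exc)
            return cached_file
        raise
```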
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5869/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5868 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5868/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5868/comments | https://api.github.com/repos/huggingface/transformers/issues/5868/events | https://github.com/huggingface/transformers/pull/5868 | 660,101,945 | MDExOlB1bGxSZXF1ZXN0NDUxODg3Njk4 | 5,868 | [cleanup] squad processor | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5868/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5868",
"html_url": "https://github.com/huggingface/transformers/pull/5868",
"diff_url": "https://github.com/huggingface/transformers/pull/5868.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5868.patch",
"merged_at": 1595256251000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5867 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5867/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5867/comments | https://api.github.com/repos/huggingface/transformers/issues/5867/events | https://github.com/huggingface/transformers/pull/5867 | 660,099,777 | MDExOlB1bGxSZXF1ZXN0NDUxODg1Njkz | 5,867 | Update tokenizers to 0.8.1.rc to fix Mac OS X issues | {
"login": "sepal",
"id": 197674,
"node_id": "MDQ6VXNlcjE5NzY3NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/197674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sepal",
"html_url": "https://github.com/sepal",
"followers_url": "https://api.github.com/users/sepal/followers",
"following_url": "https://api.github.com/users/sepal/following{/other_user}",
"gists_url": "https://api.github.com/users/sepal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sepal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sepal/subscriptions",
"organizations_url": "https://api.github.com/users/sepal/orgs",
"repos_url": "https://api.github.com/users/sepal/repos",
"events_url": "https://api.github.com/users/sepal/events{/privacy}",
"received_events_url": "https://api.github.com/users/sepal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5867?src=pr&el=h1) Report\n> Merging [#5867](https://codecov.io/gh/huggingface/transformers/pull/5867?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ba2400189b2242620868096ae49babf93bd9ce00&el=desc) will **decrease** coverage by `0.39%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5867?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5867 +/- ##\n==========================================\n- Coverage 78.48% 78.09% -0.40% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n- Hits 20564 20461 -103 \n- Misses 5636 5739 +103 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5867?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: 
|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5867?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5867?src=pr&el=footer). Last update [ba24001...4d35935](https://codecov.io/gh/huggingface/transformers/pull/5867?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Awesome! Thank you @sepal ",
"Could someone provide more detailed instructions to update the tokenizers in the version that solves the issue? I have a mac (10.13) and encountered the same problem. I tried through terminal to pip install tokenizers 0.8.1.rs2 but ended up with this error.\r\n\r\nERROR: Failed building wheel for tokenizers\r\nFailed to build tokenizers\r\nERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly"
] | 1,595 | 1,650 | 1,595 | CONTRIBUTOR | null | As mentioned in huggingface/tokenizers#321 tokenizers and thus transformers fails on macOS 10.11+. The issue was fixed in `0.8.1.rc2` so this PR updates the dependency to the new version.
I ran all tests on my Mac with Mac OS High Sierra 10.13.6 as described in the [readme](https://github.com/huggingface/transformers/blob/master/README.md#tests), but not the examples:
```
1607 passed, 304 skipped, 15294 warnings in 465.74s (0:07:45)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5867/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5867",
"html_url": "https://github.com/huggingface/transformers/pull/5867",
"diff_url": "https://github.com/huggingface/transformers/pull/5867.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5867.patch",
"merged_at": 1595074812000
} |
https://api.github.com/repos/huggingface/transformers/issues/5866 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5866/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5866/comments | https://api.github.com/repos/huggingface/transformers/issues/5866/events | https://github.com/huggingface/transformers/pull/5866 | 660,093,253 | MDExOlB1bGxSZXF1ZXN0NDUxODc5NzA3 | 5,866 | T5Tokenizer adds EOS token if not already added | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5866?src=pr&el=h1) Report\n> Merging [#5866](https://codecov.io/gh/huggingface/transformers/pull/5866?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7e6397a7d8e7433aa4c4cafba98e08e5c73f087c?el=desc) will **decrease** coverage by `0.67%`.\n> The diff coverage is `89.47%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5866?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5866 +/- ##\n==========================================\n- Coverage 80.10% 79.42% -0.68% \n==========================================\n Files 156 156 \n Lines 28411 28426 +15 \n==========================================\n- Hits 22758 22578 -180 \n- Misses 5653 5848 +195 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5866?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.32% <89.47%> (-1.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5866/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5866?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5866?src=pr&el=footer). 
Last update [7e6397a...d977bff](https://codecov.io/gh/huggingface/transformers/pull/5866?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"FWIW, I get identical results of `27.84` with this branch and the master.",
"Happy to eventually remove the check to see if it's already there.",
"I think we can keep it like this right now with a warning for future versions. It would create a breaking change to users, and I feel it would be especially hard to debug an unknown drop in performance due to an additional token being added, right?",
"Will this behavior cause problems for the unsupervised setting? Per the [docs](https://huggingface.co/transformers/model_doc/t5.html#training), `</s>` is not added during denoising training:\r\n\r\n```\r\ninput_ids = tokenizer.encode('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt')\r\nlabels = tokenizer.encode('<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>', return_tensors='pt')\r\nmodel(input_ids=input_ids, labels=labels)\r\n```\r\n\r\nNot sure if this will cause problems. (Also, Aa a somewhat related question, should the sentinel tokens in the `labels` be excluded from the loss in this setting, as I believe is the case with `[MASK]` in BERT?).",
"I'm not sure about either question:\r\nmade an issue verifying the docs: https://github.com/huggingface/transformers/issues/7904\r\nFeel free to make an issue about the sentinel tokens question. I'd tag thomwolf/patrickvonplaten.\r\n"
] | 1,595 | 1,603 | 1,598 | CONTRIBUTOR | null | T5 Tokenizer should add `</s>` to the end of sequences. Since some users are doing this on their own, this PR only adds `</s>` if it has not already been added.
On my machine, this makes zero shot validation BLEU go from 27.87 -> 27.65. Since this change is needed for finetuning, and the empirical difference is small and doesn't happen on Stas' machine, I would recommend merging this.
If others want to test, the command takes about 3 mins to run on brutasse.
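The tokenizer change can be sketched as follows (illustrative only — token ids are made up, and the real `T5Tokenizer` works with its vocabulary's `</s>` id rather than this standalone helper):

```python
def maybe_add_eos(token_ids, eos_token_id):
    """Append the EOS id unless the sequence already ends with it,
    so inputs that users manually suffixed with </s> are left alone.
    (Illustrative sketch, not the actual T5Tokenizer code.)"""
    if token_ids and token_ids[-1] == eos_token_id:
        return list(token_ids)
    return list(token_ids) + [eos_token_id]
```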
### Zero Shot BLEU Scores
For english -> romanian
I grabbed the WMT english-romanian dataset:
```bash
wget https://s3.amazonaws.com/datasets.huggingface.co/translation/wmt_en_ro.tar.gz
tar -xzvf wmt_en_ro.tar.gz
```
Then ran evaluation (without finetuning) on the validation split:
```bash
export DATA_DIR=wmt_en_ro
python run_eval.py t5-base \
$DATA_DIR/val.source t5_val_generations.txt \
--reference_path $DATA_DIR/val.target \
--score_path t5_enro_bleu_eos.json \
--task translation_en_to_ro \
--device cuda \
--fp16 \
--bs 32
```
(this branch) (with EOS): 27.65
master (no EOS): 27.87
```
sacrebleu==1.4.3
torch==1.5.1
```
Will merge and fix tests if others have positive results. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5866/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/5866/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5866",
"html_url": "https://github.com/huggingface/transformers/pull/5866",
"diff_url": "https://github.com/huggingface/transformers/pull/5866.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5866.patch",
"merged_at": 1598381769000
} |
https://api.github.com/repos/huggingface/transformers/issues/5865 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5865/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5865/comments | https://api.github.com/repos/huggingface/transformers/issues/5865/events | https://github.com/huggingface/transformers/pull/5865 | 660,089,280 | MDExOlB1bGxSZXF1ZXN0NDUxODc2MDI4 | 5,865 | [seq2seq] distillation.py accepts trainer arguments | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5865?src=pr&el=h1) Report\n> Merging [#5865](https://codecov.io/gh/huggingface/transformers/pull/5865?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/529850ae7bca0ff388778c3c0d66240834cf56c3&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5865?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5865 +/- ##\n=======================================\n Coverage 78.48% 78.48% \n=======================================\n Files 146 146 \n Lines 26200 26200 \n=======================================\n Hits 20563 20563 \n Misses 5637 5637 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5865?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5865?src=pr&el=footer). Last update [529850a...259bc29](https://codecov.io/gh/huggingface/transformers/pull/5865?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | cc @nateraw | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5865/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5865",
"html_url": "https://github.com/huggingface/transformers/pull/5865",
"diff_url": "https://github.com/huggingface/transformers/pull/5865.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5865.patch",
"merged_at": 1595072638000
} |
https://api.github.com/repos/huggingface/transformers/issues/5864 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5864/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5864/comments | https://api.github.com/repos/huggingface/transformers/issues/5864/events | https://github.com/huggingface/transformers/pull/5864 | 660,079,943 | MDExOlB1bGxSZXF1ZXN0NDUxODY3NDYw | 5,864 | Create README.md | {
"login": "tuner007",
"id": 46425391,
"node_id": "MDQ6VXNlcjQ2NDI1Mzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuner007",
"html_url": "https://github.com/tuner007",
"followers_url": "https://api.github.com/users/tuner007/followers",
"following_url": "https://api.github.com/users/tuner007/following{/other_user}",
"gists_url": "https://api.github.com/users/tuner007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuner007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuner007/subscriptions",
"organizations_url": "https://api.github.com/users/tuner007/orgs",
"repos_url": "https://api.github.com/users/tuner007/repos",
"events_url": "https://api.github.com/users/tuner007/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuner007/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5864?src=pr&el=h1) Report\n> Merging [#5864](https://codecov.io/gh/huggingface/transformers/pull/5864?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ba2400189b2242620868096ae49babf93bd9ce00&el=desc) will **decrease** coverage by `0.39%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5864?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5864 +/- ##\n==========================================\n- Coverage 78.48% 78.09% -0.40% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n- Hits 20564 20460 -104 \n- Misses 5636 5740 +104 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5864?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5864/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | 
:arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5864?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5864?src=pr&el=footer). Last update [ba24001...84114cc](https://codecov.io/gh/huggingface/transformers/pull/5864?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5864/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5864",
"html_url": "https://github.com/huggingface/transformers/pull/5864",
"diff_url": "https://github.com/huggingface/transformers/pull/5864.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5864.patch",
"merged_at": 1595351707000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5863 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5863/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5863/comments | https://api.github.com/repos/huggingface/transformers/issues/5863/events | https://github.com/huggingface/transformers/issues/5863 | 660,007,343 | MDU6SXNzdWU2NjAwMDczNDM= | 5,863 | not able to reproduce accuracy at the end of same epoch | {
"login": "mithunpaul08",
"id": 1056029,
"node_id": "MDQ6VXNlcjEwNTYwMjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1056029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mithunpaul08",
"html_url": "https://github.com/mithunpaul08",
"followers_url": "https://api.github.com/users/mithunpaul08/followers",
"following_url": "https://api.github.com/users/mithunpaul08/following{/other_user}",
"gists_url": "https://api.github.com/users/mithunpaul08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mithunpaul08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mithunpaul08/subscriptions",
"organizations_url": "https://api.github.com/users/mithunpaul08/orgs",
"repos_url": "https://api.github.com/users/mithunpaul08/repos",
"events_url": "https://api.github.com/users/mithunpaul08/events{/privacy}",
"received_events_url": "https://api.github.com/users/mithunpaul08/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,602 | 1,602 | NONE | null | # 🐛 Bug
update: Found what is causing this bug.
Your optimization [code](https://github.com/huggingface/transformers/blob/master/src/transformers/optimization.py#L70) uses a linear learning-rate scheduler with warmup which is **dependent on the total number of training steps** (== number of batches * total epochs). This injects randomness (as the total epoch count changes) into every scheduler.step(), which affects the weights of the trained model, which in turn shows up in the accuracy.
The effect of this is very minimal in the in-domain setting. However, since I was evaluating in a cross-domain setting, it ended up being heavily amplified. Can you please make the warm-up dependent on the current epoch and not on the total epochs/step count?
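To make that dependence concrete, here is a small pure-Python sketch. The helper name and step counts are illustrative, not taken from the library; the function just follows the linear warmup-then-linear-decay shape described above:

```python
def linear_warmup_factor(step, num_warmup_steps, num_training_steps):
    """Multiplier applied to the base learning rate at a given step."""
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)
    # After warmup: linear decay from 1.0 down to 0.0 at num_training_steps.
    return max(0.0, (num_training_steps - step)
               / max(1, num_training_steps - num_warmup_steps))

# Same step, same warmup, but a different *total* number of steps
# (e.g. 1 epoch vs. 25 epochs of 250 batches each) gives a different
# learning-rate multiplier at step 100:
lr_one_epoch = linear_warmup_factor(100, 0, 250)        # 150/250  = 0.6
lr_many_epochs = linear_warmup_factor(100, 0, 25 * 250)  # 6150/6250 = 0.984
```

At the same absolute step, the two runs see different learning rates, so the weights reached at the end of epoch 1 can diverge even with identical data order and seeds.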
----original post
When I am running for, say, 25 epochs, I want to run an eval on the dev and test partitions at the end of every epoch. So I moved the evaluate() code in trainer.py to run after all batches. However, the accuracy I get at the end of, say, epoch 1 (in the big 25-epoch run) is different from what I get when I run the entire code for just 1 epoch. This might be the same answer to [this](https://github.com/huggingface/transformers/issues/5264) issue as well.
## Information
Model I am using (Bert, XLNet ...):Bert (base-uncased)
Language I am using the model on (English, Chinese ...):English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below): the only modification is that I call the same evaluate function after every epoch. You can see that [here](https://github.com/mithunpaul08/transformers/blob/master/src/transformers/trainer.py#L1592).
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I am training and testing on [FEVER](https://s3-eu-west-1.amazonaws.com/fever.public/wiki-pages.zip). My code should download it automatically though.
## To reproduce
Steps to reproduce the behavior:
- Kindly please clone my [repo](https://github.com/mithunpaul08/transformers)
- conda create --name huggingface python=3
- source activate huggingface
- pip install -r ./examples/requirements.txt
from the home folder (i.e. /huggingface/) do:
- cd mithun_scripts/
- ./run_all.sh
(it should run, I think)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5863/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5862 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5862/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5862/comments | https://api.github.com/repos/huggingface/transformers/issues/5862/events | https://github.com/huggingface/transformers/issues/5862 | 659,930,066 | MDU6SXNzdWU2NTk5MzAwNjY= | 5,862 | Potential security vulnerability regarding Hosted Interface API? | {
"login": "KerenzaDoxolodeo",
"id": 7535438,
"node_id": "MDQ6VXNlcjc1MzU0Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7535438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KerenzaDoxolodeo",
"html_url": "https://github.com/KerenzaDoxolodeo",
"followers_url": "https://api.github.com/users/KerenzaDoxolodeo/followers",
"following_url": "https://api.github.com/users/KerenzaDoxolodeo/following{/other_user}",
"gists_url": "https://api.github.com/users/KerenzaDoxolodeo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KerenzaDoxolodeo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KerenzaDoxolodeo/subscriptions",
"organizations_url": "https://api.github.com/users/KerenzaDoxolodeo/orgs",
"repos_url": "https://api.github.com/users/KerenzaDoxolodeo/repos",
"events_url": "https://api.github.com/users/KerenzaDoxolodeo/events{/privacy}",
"received_events_url": "https://api.github.com/users/KerenzaDoxolodeo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Your `requests` syntax is incorrect, you're not actually sending the request body as json.\r\n\r\nThis works:\r\n```python\r\nanswer = requests.post(\r\n \"https://api-inference.huggingface.co/models/distilbert-base-cased-distilled-squad\",\r\n json={\"question\": \"What is my name?\", \"context\": \"My name is Batik\"},\r\n)\r\n```\r\n\r\nRegarding the stack trace, I do not see an issue with returning it (it's json-encoded), happy to discuss more."
] | 1,595 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): distilbert-base-cased-distilled-squad with huggingface.co's API
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: Question and answering
## To reproduce
Steps to reproduce the behavior:
Run this code on a notebook
```
answer = requests.post('https://api-inference.huggingface.co/models/distilbert-base-cased-distilled-squad',
headers = {"Content-Type": "application/json"},
data = {"question":'What is my name?' ,
"context" :"My name is Batik"})
```
Result: 500 error, with the following content when I executed `print(answer.content)`:
```
b'{"error":"JSONDecodeError(\'Expecting value: line 1 column 1 (char 0)\')","traceback":" File \\"/home/hf/api-inference/server.py\\", line 251, in model_forward\\n inputs = await request.json()\\n File \\"/home/hf/api-inference/.env/lib/python3.8/site-packages/starlette/requests.py\\", line 227, in json\\n self._json = json.loads(body)\\n File \\"/usr/lib/python3.8/json/__init__.py\\", line 357, in loads\\n return _default_decoder.decode(s)\\n File \\"/usr/lib/python3.8/json/decoder.py\\", line 337, in decode\\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\\n File \\"/usr/lib/python3.8/json/decoder.py\\", line 355, in raw_decode\\n raise JSONDecodeError(\\"Expecting value\\", s, err.value) from None\\n"}'
```
## Expected behavior
I imagine we don't want to return the error traceback, for security reasons.
## Environment info
I tried using Python's `requests` and Postman and got the same outcome. Furthermore, I'm wondering why I got a 500 error. Was my code wrong?
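The server-side `JSONDecodeError` in the traceback is consistent with the request body not being JSON: passing a plain dict via `data=` makes `requests` form-encode it, whereas the `json=` keyword serializes it to a JSON document. A minimal offline sketch of the difference (no network call; the payload is the one from the snippet above):

```python
import json
from urllib.parse import urlencode

payload = {"question": "What is my name?", "context": "My name is Batik"}

form_body = urlencode(payload)   # what `data=payload` puts on the wire
json_body = json.dumps(payload)  # what `json=payload` puts on the wire

# The server calls json.loads() on the raw body; only the JSON form parses.
json.loads(json_body)
try:
    json.loads(form_body)
    form_parses = True
except json.JSONDecodeError:
    form_parses = False  # "Expecting value: line 1 column 1 (char 0)"
```

Because the form-encoded body starts with `question=...` rather than `{`, the server's `json.loads` fails at line 1, column 1, which matches the error message returned by the API.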
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5862/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5861 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5861/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5861/comments | https://api.github.com/repos/huggingface/transformers/issues/5861/events | https://github.com/huggingface/transformers/pull/5861 | 659,879,619 | MDExOlB1bGxSZXF1ZXN0NDUxNjgyNTM4 | 5,861 | [seq2seq] add back clargs | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @nateraw ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5861?src=pr&el=h1) Report\n> Merging [#5861](https://codecov.io/gh/huggingface/transformers/pull/5861?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/529850ae7bca0ff388778c3c0d66240834cf56c3&el=desc) will **decrease** coverage by `0.25%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5861?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5861 +/- ##\n==========================================\n- Coverage 78.48% 78.23% -0.26% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n- Hits 20563 20497 -66 \n- Misses 5637 5703 +66 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5861?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% 
<0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5861?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5861?src=pr&el=footer). Last update [529850a...c7b897e](https://codecov.io/gh/huggingface/transformers/pull/5861?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"better way in #5865 "
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5861/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5861",
"html_url": "https://github.com/huggingface/transformers/pull/5861",
"diff_url": "https://github.com/huggingface/transformers/pull/5861.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5861.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5860 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5860/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5860/comments | https://api.github.com/repos/huggingface/transformers/issues/5860/events | https://github.com/huggingface/transformers/issues/5860 | 659,876,511 | MDU6SXNzdWU2NTk4NzY1MTE= | 5,860 | issue with loading pretrained model - xlnet | {
"login": "LoriRongrong",
"id": 42275621,
"node_id": "MDQ6VXNlcjQyMjc1NjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/42275621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LoriRongrong",
"html_url": "https://github.com/LoriRongrong",
"followers_url": "https://api.github.com/users/LoriRongrong/followers",
"following_url": "https://api.github.com/users/LoriRongrong/following{/other_user}",
"gists_url": "https://api.github.com/users/LoriRongrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LoriRongrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoriRongrong/subscriptions",
"organizations_url": "https://api.github.com/users/LoriRongrong/orgs",
"repos_url": "https://api.github.com/users/LoriRongrong/repos",
"events_url": "https://api.github.com/users/LoriRongrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/LoriRongrong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
I ran the following code:

```python
from transformers import XLNetTokenizer, XLNetForSequenceClassification
import torch

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetForSequenceClassification.from_pretrained('xlnet-base-cased')
```
## Details
<!-- Description of your issue -->
and I got the following error:
> OSError: Can't load config for 'xlnet-base-cased'. Make sure that:
> 'xlnet-base-cased' is a correct model identifier listed on 'https://huggingface.co/models'
> or 'xlnet-base-cased' is the correct path to a directory containing a config.json file
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
The error didn't appear when I ran the same piece of code two days ago, so I think it might be caused by a recent update.
"url": "https://api.github.com/repos/huggingface/transformers/issues/5860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5860/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5859 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5859/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5859/comments | https://api.github.com/repos/huggingface/transformers/issues/5859/events | https://github.com/huggingface/transformers/issues/5859 | 659,841,909 | MDU6SXNzdWU2NTk4NDE5MDk= | 5,859 | [seq2seq] organize commands into scripts/ subdir | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | whoever takes this, please test them after! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5859/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5858 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5858/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5858/comments | https://api.github.com/repos/huggingface/transformers/issues/5858/events | https://github.com/huggingface/transformers/pull/5858 | 659,833,873 | MDExOlB1bGxSZXF1ZXN0NDUxNjM5OTIz | 5,858 | wrong args name: n_gpu -> gpus | {
"login": "donglixp",
"id": 1070872,
"node_id": "MDQ6VXNlcjEwNzA4NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1070872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donglixp",
"html_url": "https://github.com/donglixp",
"followers_url": "https://api.github.com/users/donglixp/followers",
"following_url": "https://api.github.com/users/donglixp/following{/other_user}",
"gists_url": "https://api.github.com/users/donglixp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donglixp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donglixp/subscriptions",
"organizations_url": "https://api.github.com/users/donglixp/orgs",
"repos_url": "https://api.github.com/users/donglixp/repos",
"events_url": "https://api.github.com/users/donglixp/events{/privacy}",
"received_events_url": "https://api.github.com/users/donglixp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5858?src=pr&el=h1) Report\n> Merging [#5858](https://codecov.io/gh/huggingface/transformers/pull/5858?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/615be03f9d961c0c9722fe10e7830e011066772e&el=desc) will **decrease** coverage by `0.20%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5858?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5858 +/- ##\n==========================================\n- Coverage 78.66% 78.46% -0.21% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n- Hits 20611 20558 -53 \n- Misses 5589 5642 +53 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5858?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5858/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5858/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5858/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5858?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5858?src=pr&el=footer). 
Last update [615be03...713f748](https://codecov.io/gh/huggingface/transformers/pull/5858?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"great catch! was fixed on master by #5798 so I don't think we need this. Let us know if further issues!"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | The correct argument name should be `gpus`.
The change fixed the following error:
```bash
warnings.warn(*args, **kwargs)
Traceback (most recent call last):
File "/home/.conda/envs/hf/lib/python3.6/site-packages/pytorch_lightning/utilities/parsing.py", line 114, in __getattr__
return self[key]
KeyError: 'n_gpu'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5858/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5858",
"html_url": "https://github.com/huggingface/transformers/pull/5858",
"diff_url": "https://github.com/huggingface/transformers/pull/5858.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5858.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5857 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5857/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5857/comments | https://api.github.com/repos/huggingface/transformers/issues/5857/events | https://github.com/huggingface/transformers/pull/5857 | 659,683,551 | MDExOlB1bGxSZXF1ZXN0NDUxNTAxNTE1 | 5,857 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5857?src=pr&el=h1) Report\n> Merging [#5857](https://codecov.io/gh/huggingface/transformers/pull/5857?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/615be03f9d961c0c9722fe10e7830e011066772e&el=desc) will **decrease** coverage by `0.58%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5857?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5857 +/- ##\n==========================================\n- Coverage 78.66% 78.08% -0.59% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n- Hits 20611 20459 -152 \n- Misses 5589 5741 +152 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5857?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5857/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5857/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5857/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5857/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5857/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) 
| `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5857/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5857?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5857?src=pr&el=footer). Last update [615be03...e7f330a](https://codecov.io/gh/huggingface/transformers/pull/5857?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Add nlp dataset used | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5857/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5857",
"html_url": "https://github.com/huggingface/transformers/pull/5857",
"diff_url": "https://github.com/huggingface/transformers/pull/5857.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5857.patch",
"merged_at": 1595351668000
} |
https://api.github.com/repos/huggingface/transformers/issues/5856 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5856/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5856/comments | https://api.github.com/repos/huggingface/transformers/issues/5856/events | https://github.com/huggingface/transformers/pull/5856 | 659,682,507 | MDExOlB1bGxSZXF1ZXN0NDUxNTAwNTQ2 | 5,856 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"👍 awesome\r\n",
"Even if dataset is not in `nlp` yet, you can still link to it, as it will display a missing dataset page with a call to action to add it to nlp: e.g. https://huggingface.co/datasets/dlkfjldsfjdslf"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Add dataset used as it is now part of nlp package | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5856/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5856",
"html_url": "https://github.com/huggingface/transformers/pull/5856",
"diff_url": "https://github.com/huggingface/transformers/pull/5856.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5856.patch",
"merged_at": 1595351468000
} |
https://api.github.com/repos/huggingface/transformers/issues/5855 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5855/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5855/comments | https://api.github.com/repos/huggingface/transformers/issues/5855/events | https://github.com/huggingface/transformers/pull/5855 | 659,654,472 | MDExOlB1bGxSZXF1ZXN0NDUxNDc1NDIy | 5,855 | docs: fix model sharing file names | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5855?src=pr&el=h1) Report\n> Merging [#5855](https://codecov.io/gh/huggingface/transformers/pull/5855?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8279471506fab5733dab3e2d3a1542010c976d8a?el=desc) will **decrease** coverage by `2.76%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5855?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5855 +/- ##\n==========================================\n- Coverage 79.87% 77.11% -2.77% \n==========================================\n Files 181 181 \n Lines 35788 35788 \n==========================================\n- Hits 28587 27597 -990 \n- Misses 7201 8191 +990 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5855?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/5855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `24.25% <0.00%> (-73.56%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | 
`54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.13% <0.00%> (-0.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.48% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.90% <0.00%> (+0.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5855/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `87.04% <0.00%> (+1.03%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/5855/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5855?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5855?src=pr&el=footer). Last update [8279471...d60525f](https://codecov.io/gh/huggingface/transformers/pull/5855?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"It's more general than vocab.txt or vocab.json so I'll let @sgugger chime in as I think he wrote those lines.",
"I didn't know that when I wrote it (and the file was vocab.json for me ;-) ). This looks good to me with the suggestion, thanks for clarifying!",
"Looks good on my side, the only file removed from my example would be `training_args.bin` which makes sense.",
"Let me know if I need to do any other changes",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Oops, sorry this got lost in my notifications. Don't hesitate to ping me if it happens again @borisdayma, will merge as soon as the CI is green."
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | Documentation references `vocab.txt` while I think it should be `vocab.json`.
Additional notes:
* I also have `training_args.bin` and `merges.txt` which I understand are not needed to be uploaded
* I didn't get `added_tokens.json` but I understand it's not always present | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5855/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5855",
"html_url": "https://github.com/huggingface/transformers/pull/5855",
"diff_url": "https://github.com/huggingface/transformers/pull/5855.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5855.patch",
"merged_at": 1601295451000
} |
https://api.github.com/repos/huggingface/transformers/issues/5854 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5854/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5854/comments | https://api.github.com/repos/huggingface/transformers/issues/5854/events | https://github.com/huggingface/transformers/pull/5854 | 659,512,766 | MDExOlB1bGxSZXF1ZXN0NDUxMzQ4MTMx | 5,854 | Revert "XLNet `use_cache` refactor" | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Reverts huggingface/transformers#5770 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5854/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5854/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5854",
"html_url": "https://github.com/huggingface/transformers/pull/5854",
"diff_url": "https://github.com/huggingface/transformers/pull/5854.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5854.patch",
"merged_at": 1595010825000
} |
https://api.github.com/repos/huggingface/transformers/issues/5853 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5853/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5853/comments | https://api.github.com/repos/huggingface/transformers/issues/5853/events | https://github.com/huggingface/transformers/issues/5853 | 659,428,139 | MDU6SXNzdWU2NTk0MjgxMzk= | 5,853 | [BartModel] Question for BartModel Output Shape when I pass the 'decoder_input_ids' | {
"login": "Bannng",
"id": 51171232,
"node_id": "MDQ6VXNlcjUxMTcxMjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/51171232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bannng",
"html_url": "https://github.com/Bannng",
"followers_url": "https://api.github.com/users/Bannng/followers",
"following_url": "https://api.github.com/users/Bannng/following{/other_user}",
"gists_url": "https://api.github.com/users/Bannng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bannng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bannng/subscriptions",
"organizations_url": "https://api.github.com/users/Bannng/orgs",
"repos_url": "https://api.github.com/users/Bannng/repos",
"events_url": "https://api.github.com/users/Bannng/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bannng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
Hi, I'm trying to use the model named 'BartForConditionalGeneration' for text generation.
The main problem is this: when I pass `decoder_input_ids` to the model's forward method,
the first element of the model's output always has the shape [batch_size, 1, vocab_size].
I might have misunderstood the model and the Bart docs overall;
when I pass only `input_ids` without `decoder_input_ids`, it works correctly,
but the output never has the shape [batch_size, target_seq_len, vocab_size] when I pass `decoder_input_ids`.
I think that when I pass `decoder_input_ids` of shape [batch_size, target_seq_len], the model's output (the prediction scores) should have the shape
[batch_size, target_seq_len, vocab_size].
The reason I tried to pass `decoder_input_ids` is that Bart is a seq2seq model, so I expected its output logits to be computed just like in the basic transformer, and I wanted to pretrain the model.
Please help me solve this problem. If I misunderstood the concept of Bart entirely, just tell me.
I need a clear view of my situation.
I used MBartTokenizer and the 'facebook/mbart-large-en-ro' pretrained weights,
with the model BartForConditionalGeneration.from_pretrained('facebook/mbart-large-en-ro').
It also didn't work for the plain BartModel (with the same pretrained config).
I'm really confused about it, so the whole question may look like a mess; sorry for that.
Thank you for reading my question. Please help me.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5853/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5852 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5852/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5852/comments | https://api.github.com/repos/huggingface/transformers/issues/5852/events | https://github.com/huggingface/transformers/issues/5852 | 659,422,542 | MDU6SXNzdWU2NTk0MjI1NDI= | 5,852 | Exception in device=TPU:1: 'ascii' codec can't decode byte 0xc2 in position 37: ordinal not in range(128) | {
"login": "marton-avrios",
"id": 59836119,
"node_id": "MDQ6VXNlcjU5ODM2MTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/59836119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marton-avrios",
"html_url": "https://github.com/marton-avrios",
"followers_url": "https://api.github.com/users/marton-avrios/followers",
"following_url": "https://api.github.com/users/marton-avrios/following{/other_user}",
"gists_url": "https://api.github.com/users/marton-avrios/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marton-avrios/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marton-avrios/subscriptions",
"organizations_url": "https://api.github.com/users/marton-avrios/orgs",
"repos_url": "https://api.github.com/users/marton-avrios/repos",
"events_url": "https://api.github.com/users/marton-avrios/events{/privacy}",
"received_events_url": "https://api.github.com/users/marton-avrios/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"...it also happens on CPU",
"I solved it. If you happen to use a Google Cloud VM with the `Debian GNU/Linux 9 Stretch + PyTorch/XLA` image then locale is not set correctly. Add the following to `/etc/default/locale`:\r\n```\r\nLC_ALL=en_US.UTF-8\r\n```",
"...thank you, it works!"
] | 1,595 | 1,595 | 1,595 | NONE | null | I get the error message when trying to run the seq2seq example on 8 TPU cores on XSUM and bart-large.
Run this from `examples/seq2seq`:
```
export PYTHONPATH="../":"${PYTHONPATH}"
python finetune.py \
--learning_rate=3e-5 \
--gpus 0 \
--n_tpu_cores 8 \
--do_train \
--do_predict \
--n_val 1000 \
--val_check_interval 1.0 \
--sortish_sampler \
--data_dir ${PWD}/xsum \
--train_batch_size=1 \
--eval_batch_size=1 \
--output_dir=xsum_results \
--num_train_epochs 1 \
--model_name_or_path facebook/bart-large
```
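The traceback boils down to Python decoding UTF-8 bytes with the ASCII codec because the VM's locale is unset. A minimal sketch of the failure mode, independent of TPUs (the non-breaking-space input is an assumed example; a `0xc2` byte is simply the first byte of many two-byte UTF-8 sequences):

```python
# U+00A0 (non-breaking space) encodes to the two bytes 0xc2 0xa0 in UTF-8.
data = "examples\u00a0seq2seq".encode("utf-8")

try:
    data.decode("ascii")
except UnicodeDecodeError as err:
    print(err)  # 'ascii' codec can't decode byte 0xc2 in position 8: ...

print(data.decode("utf-8"))  # succeeds once the codec is UTF-8
```

So any path, log line, or dataset file containing a non-ASCII character will trip this whenever the process falls back to the ASCII codec.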
BTW there is also a bug in `lightning_base.py`: the `gpus` argument is passed twice if `n_tpu_cores` is specified (the second time through `**train_params`), so I replaced the Trainer creation lines with these:
```
n_gpus = train_params.pop("gpus", args.gpus)
trainer = pl.Trainer(
logger=logger,
accumulate_grad_batches=args.gradient_accumulation_steps,
gpus=n_gpus,
max_epochs=args.num_train_epochs,
early_stop_callback=early_stopping_callback,
gradient_clip_val=args.max_grad_norm,
checkpoint_callback=checkpoint_callback,
callbacks=[logging_callback] + extra_callbacks,
fast_dev_run=args.fast_dev_run,
val_check_interval=args.val_check_interval,
weights_summary=None,
resume_from_checkpoint=args.resume_from_checkpoint,
**train_params,
)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5852/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5851 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5851/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5851/comments | https://api.github.com/repos/huggingface/transformers/issues/5851/events | https://github.com/huggingface/transformers/issues/5851 | 659,410,150 | MDU6SXNzdWU2NTk0MTAxNTA= | 5,851 | Covid-19 - TPU V3-1024 - T5 11B: Tensorflow to Pytorch conversion failed | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@thomwolf @LysandreJik @julien-c @patrickvonplaten @VictorSanh @sshleifer @mfuntowicz @sgugger your feedback will be highly appreciated.",
"I have checked the \"operative_config.gin\", and this is the encoder/decoder configuration 👍 \r\n\r\n```\r\n\r\n# Parameters for decoder/LayerStack:\r\n# ==============================================================================\r\ndecoder/LayerStack.dropout_rate = None\r\ndecoder/LayerStack.norm_epsilon = None\r\ndecoder/LayerStack.recompute_grads = False\r\ndecoder/LayerStack.sublayers_final = \\\r\n [@transformer.sublayer_rms_norm, @transformer.sublayer_dropout]\r\ndecoder/LayerStack.sublayers_initial = [@transformer.sublayer_dropout]\r\ndecoder/LayerStack.sublayers_per_layer = \\\r\n [@transformer.sublayer_rms_norm,\r\n @transformer.sublayer_call_layer,\r\n @transformer.sublayer_dropout,\r\n @transformer.sublayer_residual]\r\n\r\n# Parameters for encoder/LayerStack:\r\n# ==============================================================================\r\nencoder/LayerStack.dropout_rate = None\r\nencoder/LayerStack.norm_epsilon = None\r\nencoder/LayerStack.recompute_grads = False\r\nencoder/LayerStack.sublayers_final = \\\r\n [@transformer.sublayer_rms_norm, @transformer.sublayer_dropout]\r\nencoder/LayerStack.sublayers_initial = [@transformer.sublayer_dropout]\r\nencoder/LayerStack.sublayers_per_layer = \\\r\n [@transformer.sublayer_rms_norm,\r\n @transformer.sublayer_call_layer,\r\n @transformer.sublayer_dropout,\r\n @transformer.sublayer_residual]\r\n\r\n# Parameters for make_bitransformer:\r\n# ==============================================================================\r\nmake_bitransformer.decoder_name = 'decoder'\r\nmake_bitransformer.encoder_name = 'encoder'\r\n\r\n# Parameters for decoder/make_layer_stack:\r\n# ==============================================================================\r\ndecoder/make_layer_stack.block_scope = True\r\ndecoder/make_layer_stack.layers = \\\r\n [@mesh_tensorflow.transformer.transformer_layers.SelfAttention,\r\n @mesh_tensorflow.transformer.transformer_layers.EncDecAttention,\r\n 
@mesh_tensorflow.transformer.transformer_layers.DenseReluDense]\r\ndecoder/make_layer_stack.num_layers = %num_layers\r\n\r\n# Parameters for encoder/make_layer_stack:\r\n# ==============================================================================\r\nencoder/make_layer_stack.block_scope = True\r\nencoder/make_layer_stack.layers = \\\r\n [@mesh_tensorflow.transformer.transformer_layers.SelfAttention,\r\n @mesh_tensorflow.transformer.transformer_layers.DenseReluDense]\r\nencoder/make_layer_stack.num_layers = %num_layers\r\n\r\n```",
"I also upgraded the transformers package to the latest version, but still the same problem. ",
"I also tried to load the model using the TF checkpoint directly and it doesn't work:\r\n\r\n```\r\nimport transformers\r\n\r\ntokenizer = transformers.T5Tokenizer.from_pretrained(\"xxx\")\r\nconfig = transformers.T5Config.from_json_file(\"t5-11b-config.json\")\r\nmodel = transformers.TFT5Model(config,\"xxx/\")\r\n\r\nencoded_input = tokenizer(\"A A\", return_tensors='tf')\r\n\r\nmodel(encoded_input)\r\n```\r\nTensor:\r\n`{'input_ids': <tf.Tensor: shape=(1, 2), dtype=int32, numpy=array([[71, 71]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(1, 2), dtype=int32, numpy=array([[1, 1]], dtype=int32)>}`\r\n\r\n\r\nError:\r\n```\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/modeling_tf_t5.py\", line 1064, in call\r\n training=training,\r\n File \"/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/modeling_tf_t5.py\", line 610, in call\r\n raise ValueError(\"You have to specify either inputs or inputs_embeds\")\r\nValueError: You have to specify either inputs or inputs_embeds\r\n```",
"Not solving your issue but curious, are you training your model on TPU with pytorch? I'm interested to know if pytorch or transformers supports training on v3-1024 TPU. ",
"@misrasaurabh1 No we didn't use the pytorch version.",
"@sshleifer @patrickvonplaten Any update regarding the conversion from TF to Pytorch ?",
"For your first issue, I wonder if the t5 repo has changed, because our conversion script still works on a t5-base checkpoint I received last week. The checkpoint was probably trained a while ago though. \r\nwe also can't load t5-11b in the inference API, although the traceback is different.\r\n@mfuntowicz have you looked at that at all/found anything?\r\n\r\n\r\nI can help with your second issue. Try sending unpacking the elements of your input dict, so that they comply with the new T5 signature on master:\r\nhttps://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_t5.py#L1115",
"For the second issue, I was able to solve it by using the following:\r\n```\r\nids = tokenizer.encode(seq, return_tensors='tf')\r\nembedding = model(ids,decoder_input_ids=ids)\r\n```\r\nFor some reason, I have to send also the ids as decoder_input_ids.\r\n\r\nIs this correct @sshleifer or did I miss something ?\r\n\r\nFor the first issue, I could not solve it.\r\n\r\nI can send you the T5 11B checkpoint privately if you need it.",
"Regarding \"we also can't load t5-11b in the inference API, although the traceback is different.\":\r\n\r\nThis applies to both Tensorflow and Pytorch versions ?",
"Please see https://github.com/huggingface/transformers/issues/5986#issuecomment-663090043",
"@julien-c :\r\nMy 11B checkpoint is only 20GB. It was trained on very small vocab.\r\nI am not trying to use the pertained t5-11b from T5.\r\nIt is a custom new pertained model. \r\n\r\nI don't believe #5986 (comment) is related to my problem.",
"Ok, my apologies for not reading your issue carefully enough. For the inference API, that's probably the reason though.",
"No problem, my main issue is the conversion from TensorFlow to PyTorch for my newly pretrained 11B model.",
"@misrasaurabh1 @patrickvonplaten Any progress?",
"I will take a deeper look next week on Monday when I'm back regarding the conversion from tensorflow to PyTorch.\r\n\r\nRegarding, the `decoder_input_ids` (\"For some reason, I have to send also the ids as decoder_input_ids.\") -> yes for T5 you have to input both `input_ids` and `decoder_input_ids`, for the first forward call this is usually:\r\n\r\n```python\r\nfrom transformers import T5Model, T5Tokenizer\r\nimport torch\r\n\r\nmodel = T5Model.from_pretrained(\"t5-small\") # your model of choice here\r\ntokenizer = T5Tokenizer.from_pretrained(\"t5-small\")\r\n\r\ninput_ids = tokenizer(\"This sentence is encoded and only has to be passed once.\", return_tensors=\"pt\").input_ids\r\ndecoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])\r\n\r\n# first call uses the start token id for T5 decoder\r\noutputs = model(input_ids, decoder_input_ids=decoder_input_ids)\r\n\r\n# now next decoder token can be sampled from decoder outputs\r\nnext_decoder_input_ids = torch.argmax(outputs.last_hidden_state, dim=-1)\r\ndecoder_input_ids = torch.cat([decoder_input_ids, next_decoder_input_ids], dim=-1)\r\n\r\n# second call can reuse encoder outputs and continues auto-regressive language generation\r\n# model.generate(...) 
does that automatically\r\n\r\noutputs = model(encoder_outputs=outputs.decoder_past_key_values[0], decoder_input_ids=decoder_input_ids)\r\n\r\n# The same holds true for TF\r\n\r\nfrom transformers import TFT5Model, T5Tokenizer\r\nimport tensorflow as tf\r\n\r\nmodel = TFT5Model.from_pretrained(\"t5-small\") # your model of choice here\r\ntokenizer = T5Tokenizer.from_pretrained(\"t5-small\")\r\n\r\ninput_ids = tokenizer(\"This sentence is encoded and only has to be passed once.\", return_tensors=\"tf\").input_ids\r\ndecoder_input_ids = tf.convert_to_tensor([[model.config.decoder_start_token_id]], dtype=tf.dtypes.int32)\r\n\r\n# first call uses the start token id for T5 decoder\r\noutputs = model(input_ids, decoder_input_ids=decoder_input_ids)\r\n\r\n# now next decoder token can be sampled from decoder outputs\r\nnext_decoder_input_ids = tf.cast(tf.argmax(outputs[0], axis=-1), tf.dtypes.int32)\r\ndecoder_input_ids = tf.concat([decoder_input_ids, next_decoder_input_ids], axis=-1)\r\n\r\n# second call can reuse encoder outputs and continues auto-regressive language generation\r\n# model.generate(...) does that automatically\r\n\r\n# inputs (encoder inputs) has to be defined because of keras convention => it's not needed though after first \r\n# pass, so set to None\r\nmodel(None, encoder_outputs=outputs[1][0], decoder_input_ids=decoder_input_ids)\r\n```\r\n\r\nSo this means that the `decoder_input_ids` should not be the same as the `input_ids` (encoder inputs) but should correspond to the auto-regressively generated text by the model, that starts with the `decoder_start_token_id`. Hope this makes it a bit clearer.",
"Thanks @patrickvonplaten for the clarification.\r\nIn this case, I would recommend updating the documentation at:\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_t5.py#L972\r\n\r\nJust to clarify one thing: if I am only interested in feature extraction and not generation, my code should look like:\r\n\r\n```\r\nfrom transformers import TFT5Model, T5Tokenizer\r\nimport tensorflow as tf\r\n\r\nmodel = TFT5Model.from_pretrained(\"t5-small\") # your model of choice here\r\ntokenizer = T5Tokenizer.from_pretrained(\"t5-small\")\r\n\r\ninput_ids = tokenizer(\"This sentence is encoded and only has to be passed once.\", return_tensors=\"tf\").input_ids\r\ndecoder_input_ids = tf.convert_to_tensor([[model.config.decoder_start_token_id]], dtype=tf.dtypes.int32)\r\n\r\n# first call uses the start token id for T5 decoder\r\noutputs = model(input_ids, decoder_input_ids=decoder_input_ids)\r\n\r\nfeatures = outputs[2]\r\n```\r\n\r\nCorrect?\r\n\r\nHopefully next week you will figure out the problem with the PyTorch conversion. Let me know if you need our checkpoint; we can send it to you privately.",
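One option for the feature-extraction use case discussed here, sketched under the assumption of a recent `transformers` version: when only the *encoder* representations of the input sequence are needed, the encoder can be called on its own, which sidesteps `decoder_input_ids` entirely. The tiny random config below is illustrative only (not the real checkpoint's hyperparameters), and PyTorch is used for brevity.

```python
import torch
from transformers import T5Config, T5Model

# Tiny, randomly initialized model purely to illustrate the call pattern;
# the hyperparameters are made up, not those of the trained checkpoint.
config = T5Config(vocab_size=128, d_model=32, d_ff=64, d_kv=8,
                  num_layers=2, num_heads=4)
model = T5Model(config)

input_ids = torch.tensor([[71, 71, 1]])  # toy ids ending with </s> (= 1)
with torch.no_grad():
    # Run only the encoder stack: no decoder_input_ids are required for
    # pure feature extraction of the input sequence.
    encoder_outputs = model.get_encoder()(input_ids)

features = encoder_outputs.last_hidden_state  # (batch, seq_len, d_model)
print(features.shape)
```

This yields one feature vector per input token, rather than the hidden state of the first decoder step as in the snippet above; which of the two is appropriate depends on whether the downstream task needs per-token input embeddings or decoder-side features.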
"Yes, we should update the example to clarify a bit... `T5` is an encoder-decoder model, so it's usually used for conditional generation IMO, i.e. to model the distribution\r\n\r\np(y | x)\r\n\r\nwith y being the `decoder_input_ids` and x being the `input_ids`.\r\n\r\nSo for feature extraction, is the idea to extract the features of the first `decoder_input_ids` token => the hidden states of the first output? In this case, your code above is correct.\r\n\r\nCan you give me a bit more detail about the task and how / with what objective (masked language modeling, ...) the model was trained?",
"It would be great if you could send me the weights of your trained model by mail (Weshare, Google Drive); then I'll check out the problem with the conversion next week.",
"Thanks @patrickvonplaten for the clarification. I am glad that my understanding is correct and that I am extracting the features from T5 correctly.\r\n\r\nRegarding your questions:\r\n1. Task: We are training different language models for the language of life, \"protein sequences\".\r\n2. Objective: Span de-noising.\r\n3. Models: 11B and 3B models from T5.\r\n\r\nYou can find more details in my GitHub repo and the first version of my paper:\r\nhttps://github.com/agemagician/ProtTrans\r\nhttps://www.biorxiv.org/content/10.1101/2020.07.12.199554v2\r\n\r\nI have sent you an email with the 11B model weights, config file and spm model.\r\n\r\nI wish you a nice weekend, and I hope you can figure out the problem next week.\r\n\r\nThanks in advance.",
"We have also tested the 3B parameter model with shared encoder/decoder weights, and it failed as well.\r\n\r\n```\r\n2020-08-03 18:20:42.993212: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nBuilding PyTorch model from configuration: T5Config {\r\n \"architectures\": [\r\n \"T5WithLMHeadModel\"\r\n ],\r\n \"d_ff\": 5120,\r\n \"d_kv\": 64,\r\n \"d_model\": 2048,\r\n \"decoder_start_token_id\": 0,\r\n \"dropout_rate\": 0.0,\r\n \"eos_token_id\": 1,\r\n \"initializer_factor\": 1.0,\r\n \"is_encoder_decoder\": true,\r\n \"layer_norm_epsilon\": 1e-06,\r\n \"model_type\": \"t5\",\r\n \"n_positions\": 512,\r\n \"num_heads\": 32,\r\n \"num_layers\": 24,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"relative_attention_num_buckets\": 32,\r\n \"vocab_size\": 128\r\n}\r\n\r\nINFO:transformers.modeling_t5:Converting TensorFlow checkpoint from /content/models/ProtT5-BFD\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_000/dense_relu_dense/DenseReluDense/wi/kernel with shape [2048, 5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_000/dense_relu_dense/DenseReluDense/wi/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_000/dense_relu_dense/DenseReluDense/wi/kernel_slot_vr with shape [2048]\r\n[... hundreds of identical \"Loading TF weight\" lines for the remaining weights omitted ...]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_006/dense_relu_dense/DenseReluDense/wo/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/dense_relu_dense/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/dense_relu_dense/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/EncDecAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/EncDecAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/EncDecAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/EncDecAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/EncDecAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/EncDecAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/EncDecAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/EncDecAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/EncDecAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/EncDecAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/EncDecAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/EncDecAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/rms_norm/scale 
with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/enc_dec_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/SelfAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/SelfAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/SelfAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/SelfAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/SelfAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/SelfAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/SelfAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/SelfAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/SelfAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/SelfAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/SelfAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/SelfAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_006/self_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_007/dense_relu_dense/DenseReluDense/wi/kernel with shape [2048, 5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/dense_relu_dense/DenseReluDense/wi/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/dense_relu_dense/DenseReluDense/wi/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/dense_relu_dense/DenseReluDense/wo/kernel with shape [5120, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/dense_relu_dense/DenseReluDense/wo/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/dense_relu_dense/DenseReluDense/wo/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/dense_relu_dense/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/dense_relu_dense/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/EncDecAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/EncDecAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/EncDecAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/EncDecAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/EncDecAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/EncDecAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/EncDecAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_007/enc_dec_attention/EncDecAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/EncDecAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/EncDecAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/EncDecAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/EncDecAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/enc_dec_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/SelfAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/SelfAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/SelfAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/SelfAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/SelfAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/SelfAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/SelfAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/SelfAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/SelfAttention/q_slot_vr with shape 
[2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/SelfAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/SelfAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/SelfAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_007/self_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/dense_relu_dense/DenseReluDense/wi/kernel with shape [2048, 5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/dense_relu_dense/DenseReluDense/wi/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/dense_relu_dense/DenseReluDense/wi/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/dense_relu_dense/DenseReluDense/wo/kernel with shape [5120, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/dense_relu_dense/DenseReluDense/wo/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/dense_relu_dense/DenseReluDense/wo/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/dense_relu_dense/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/dense_relu_dense/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/EncDecAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/EncDecAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_008/enc_dec_attention/EncDecAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/EncDecAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/EncDecAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/EncDecAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/EncDecAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/EncDecAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/EncDecAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/EncDecAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/EncDecAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/EncDecAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/enc_dec_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/SelfAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/SelfAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/SelfAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/SelfAttention/o with shape [2048, 
2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/SelfAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/SelfAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/SelfAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/SelfAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/SelfAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/SelfAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/SelfAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/SelfAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_008/self_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/dense_relu_dense/DenseReluDense/wi/kernel with shape [2048, 5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/dense_relu_dense/DenseReluDense/wi/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/dense_relu_dense/DenseReluDense/wi/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/dense_relu_dense/DenseReluDense/wo/kernel with shape [5120, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/dense_relu_dense/DenseReluDense/wo/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_009/dense_relu_dense/DenseReluDense/wo/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/dense_relu_dense/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/dense_relu_dense/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/EncDecAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/EncDecAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/EncDecAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/EncDecAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/EncDecAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/EncDecAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/EncDecAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/EncDecAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/EncDecAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/EncDecAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/EncDecAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/EncDecAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/rms_norm/scale 
with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/enc_dec_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/SelfAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/SelfAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/SelfAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/SelfAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/SelfAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/SelfAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/SelfAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/SelfAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/SelfAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/SelfAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/SelfAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/SelfAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_009/self_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_010/dense_relu_dense/DenseReluDense/wi/kernel with shape [2048, 5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/dense_relu_dense/DenseReluDense/wi/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/dense_relu_dense/DenseReluDense/wi/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/dense_relu_dense/DenseReluDense/wo/kernel with shape [5120, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/dense_relu_dense/DenseReluDense/wo/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/dense_relu_dense/DenseReluDense/wo/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/dense_relu_dense/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/dense_relu_dense/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/EncDecAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/EncDecAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/EncDecAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/EncDecAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/EncDecAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/EncDecAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/EncDecAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_010/enc_dec_attention/EncDecAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/EncDecAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/EncDecAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/EncDecAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/EncDecAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/enc_dec_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/SelfAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/SelfAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/SelfAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/SelfAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/SelfAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/SelfAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/SelfAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/SelfAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/SelfAttention/q_slot_vr with shape 
[2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/SelfAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/SelfAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/SelfAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_010/self_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/dense_relu_dense/DenseReluDense/wi/kernel with shape [2048, 5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/dense_relu_dense/DenseReluDense/wi/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/dense_relu_dense/DenseReluDense/wi/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/dense_relu_dense/DenseReluDense/wo/kernel with shape [5120, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/dense_relu_dense/DenseReluDense/wo/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/dense_relu_dense/DenseReluDense/wo/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/dense_relu_dense/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/dense_relu_dense/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/EncDecAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/EncDecAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_011/enc_dec_attention/EncDecAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/EncDecAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/EncDecAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/EncDecAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/EncDecAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/EncDecAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/EncDecAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/EncDecAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/EncDecAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/EncDecAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/enc_dec_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/SelfAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/SelfAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/SelfAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/SelfAttention/o with shape [2048, 
2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/SelfAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/SelfAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/SelfAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/SelfAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/SelfAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/SelfAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/SelfAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/SelfAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_011/self_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_012/dense_relu_dense/DenseReluDense/wi/kernel with shape [2048, 5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_012/dense_relu_dense/DenseReluDense/wi/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_012/dense_relu_dense/DenseReluDense/wi/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_012/dense_relu_dense/DenseReluDense/wo/kernel with shape [5120, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_012/dense_relu_dense/DenseReluDense/wo/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_012/dense_relu_dense/DenseReluDense/wo/kernel_slot_vr with shape [2048]\r\n[... repetitive log lines elided: for each of decoder blocks 012 through 021, the same pattern of INFO:transformers.modeling_t5 messages repeats — loading the DenseReluDense wi kernel with shape [2048, 5120] and wo kernel with shape [5120, 2048] plus their _slot_vc/_slot_vr optimizer slots, the dense_relu_dense rms_norm scale [2048] with its _slot_v, and the EncDecAttention and SelfAttention k/o/q/v weights [2048, 2048] with their _slot_vc/_slot_vr slots and rms_norm scales [2048] ...]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_021/dense_relu_dense/DenseReluDense/wo/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/dense_relu_dense/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/dense_relu_dense/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/EncDecAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/EncDecAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/EncDecAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/EncDecAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/EncDecAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/EncDecAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/EncDecAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/EncDecAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/EncDecAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/EncDecAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/EncDecAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/EncDecAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/rms_norm/scale 
with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/enc_dec_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/SelfAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/SelfAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/SelfAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/SelfAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/SelfAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/SelfAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/SelfAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/SelfAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/SelfAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/SelfAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/SelfAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/SelfAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_021/self_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_022/dense_relu_dense/DenseReluDense/wi/kernel with shape [2048, 5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/dense_relu_dense/DenseReluDense/wi/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/dense_relu_dense/DenseReluDense/wi/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/dense_relu_dense/DenseReluDense/wo/kernel with shape [5120, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/dense_relu_dense/DenseReluDense/wo/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/dense_relu_dense/DenseReluDense/wo/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/dense_relu_dense/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/dense_relu_dense/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/EncDecAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/EncDecAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/EncDecAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/EncDecAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/EncDecAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/EncDecAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/EncDecAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_022/enc_dec_attention/EncDecAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/EncDecAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/EncDecAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/EncDecAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/EncDecAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/enc_dec_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/SelfAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/SelfAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/SelfAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/SelfAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/SelfAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/SelfAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/SelfAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/SelfAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/SelfAttention/q_slot_vr with shape 
[2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/SelfAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/SelfAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/SelfAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_022/self_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/dense_relu_dense/DenseReluDense/wi/kernel with shape [2048, 5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/dense_relu_dense/DenseReluDense/wi/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/dense_relu_dense/DenseReluDense/wi/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/dense_relu_dense/DenseReluDense/wo/kernel with shape [5120, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/dense_relu_dense/DenseReluDense/wo/kernel_slot_vc with shape [5120]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/dense_relu_dense/DenseReluDense/wo/kernel_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/dense_relu_dense/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/dense_relu_dense/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/EncDecAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/EncDecAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight 
decoder/block_023/enc_dec_attention/EncDecAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/EncDecAttention/o with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/EncDecAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/EncDecAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/EncDecAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/EncDecAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/EncDecAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/EncDecAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/EncDecAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/EncDecAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/enc_dec_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/SelfAttention/k with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/SelfAttention/k_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/SelfAttention/k_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/SelfAttention/o with shape [2048, 
2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/SelfAttention/o_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/SelfAttention/o_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/SelfAttention/q with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/SelfAttention/q_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/SelfAttention/q_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/SelfAttention/v with shape [2048, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/SelfAttention/v_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/SelfAttention/v_slot_vr with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/block_023/self_attention/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/logits/kernel with shape [2048, 128]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/logits/kernel_slot_vc with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/logits/kernel_slot_vr with shape [128]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/rms_norm/scale with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight decoder/rms_norm/scale_slot_v with shape [2048]\r\nINFO:transformers.modeling_t5:Loading TF weight global_step with shape []\r\nINFO:transformers.modeling_t5:Loading TF weight shared/embedding with shape [128, 2048]\r\nINFO:transformers.modeling_t5:Loading TF weight shared/embedding_slot_vc with shape 
[2048]\r\nINFO:transformers.modeling_t5:Loading TF weight shared/embedding_slot_vr with shape [128]\r\nINFO:transformers.modeling_t5:Skipping decoder/block_000/dense_relu_dense/DenseReluDense/wi/kernel\r\nINFO:transformers.modeling_t5:Skipping decoder/block_000/dense_relu_dense/DenseReluDense/wi/kernel\r\nINFO:transformers.modeling_t5:Skipping decoder/block_000/dense_relu_dense/DenseReluDense/wi/kernel\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/convert_t5_original_tf_checkpoint_to_pytorch.py\", line 61, in <module>\r\n convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.config_file, args.pytorch_dump_path)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/convert_t5_original_tf_checkpoint_to_pytorch.py\", line 36, in convert_tf_checkpoint_to_pytorch\r\n load_tf_weights_in_t5(model, config, tf_checkpoint_path)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py\", line 104, in load_tf_weights_in_t5\r\n pointer = getattr(pointer, \"weight\")\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 772, in __getattr__\r\n type(self).__name__, name))\r\ntorch.nn.modules.module.ModuleAttributeError: 'T5Block' object has no attribute 'weight'\r\n```",
"Ok, I will start with the 3B model then! Starting downloading the model now -> will take a look tomorrow (my internet connection is so slow at the moment that it'll take a couple of hours to download both models). \r\n\r\nJust to be sure: \r\nYou have trained the model using this repo: https://github.com/google-research/text-to-text-transfer-transformer right?\r\nAnd then you ran\r\n```\r\npython convert_t5_original_tf_checkpoint_to_pytorch.py\r\n```\r\n-> which did fail no? ",
"Thanks @patrickvonplaten.\r\n\r\nYes, I have used the official repo:\r\nhttps://github.com/google-research/text-to-text-transfer-transformer\r\n\r\nYes, I have used \"convert_t5_original_tf_checkpoint_to_pytorch.py\", which failed.\r\n\r\nAn important note:\r\n1) 11B used separate weights for encoder/decoder.\r\n2) 3B used shared weights for encoder and decoder.",
"Hey @agemagician, \r\n\r\nI changed the conversion function to handle the 3B weights you send me. I have locally saved the converted pytorch model and you can do the conversion yourself using [this](https://github.com/huggingface/transformers/tree/adapt_t5_for_covid_19_3b) branch also with the changes shown in this PR: https://github.com/huggingface/transformers/pull/6388. \r\n\r\nA couple of things that might be important: \r\n1) I noticed that the input embeddings, called `shared/embedding` and output logits, called `decoder/logits/kernel` were different meaning that for the decoder the input and output embeddings are **not** shared. We usually share input and output embeddings in T5, so in your case I added a couple of lines to make sure that the correct input and output embeddings are converted and **not** tied. See these changes: https://github.com/huggingface/transformers/blob/abec44887d48e3ef3ccde50f3331f64aa2b3cbbc/src/transformers/modeling_t5.py#L1094 \r\n2) For convenience, I did not tie the weights of the encoder / decoder now, but just copied them. This meants that the resulting 3B T5 model has a size of ~8GB (2 times bigger because float32 is used instead of TPU's bfloat16 and again 2 times bigger because encoder and decoder are not shared). Because of the small word embedding matrix the model is still smaller than the official `t5-3b`: https://huggingface.co/t5-3b. You can reduce the size of your model, by calling `model.half()` and deleting `model.encoder` before doing `model.save_pretrained(\"./\")`. When loading you will have to write a short script though that instantiates the encoder and correctly sets the encoder weights. I would not recommend to do this, but instead just use the 8GB model in PyTorch.\r\n3) I did not test the conversion of the 11B model because I currently do not have the RAM capacity to do so (32GB is just not enough for such a large model). The script should work though. 
You should make sure to set `is_tied` to `False` in this line though: https://github.com/huggingface/transformers/blob/abec44887d48e3ef3ccde50f3331f64aa2b3cbbc/src/transformers/modeling_t5.py#L64. Also note that the conversion will automatically initialize the model in fp32, so that the PyTorch model alone requires 40GB of RAM. You can later save it as `model.half()`. I hope you have enough RAM space to be able to convert the model. It will be hard to debug for me without the necessary resources, so in case you experience errors, I would suggest to start debugging in this function: https://github.com/huggingface/transformers/blob/abec44887d48e3ef3ccde50f3331f64aa2b3cbbc/src/transformers/modeling_t5.py#L64 and see what names seem to be wrong.",
"Thanks a lot @patrickvonplaten for fixing the conversion script.\r\nWe highly appreciate your effort.\r\nThis will definitely make it easier for researchers to use our T5 protein language models.\r\n\r\nI have checked the following models:\r\n1. 3B shared.\r\n2. 3B non-shared.\r\n3. 11B non-shared.\r\n\r\nAll of them were converted correctly without any issue.\r\nHowever, following your example above for using the models, the decoder generates out of vocabulary ids.\r\nDespite, that both the vocab_size on the config file and the spm is set correctly to 128.\r\n\r\nalso I got the following warning when loading the model:\r\n```\r\nSome weights of the model checkpoint at xxx/3b-shared/pytorch/ were not used when initializing T5Model: ['lm_head.weight']\r\n- This IS expected if you are initializing T5Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing T5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n```\r\n\r\n\r\nError:\r\n```\r\n---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\n<ipython-input-66-c65c815d3456> in <module>\r\n 2 # model.generate(...) 
does that automatically\r\n 3 \r\n----> 4 outputs = model(encoder_outputs=outputs.decoder_past_key_values[0], decoder_input_ids=decoder_input_ids)\r\n\r\n/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 720 result = self._slow_forward(*input, **kwargs)\r\n 721 else:\r\n--> 722 result = self.forward(*input, **kwargs)\r\n 723 for hook in itertools.chain(\r\n 724 _global_forward_hooks.values(),\r\n\r\n/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, decoder_past_key_values, use_cache, inputs_embeds, decoder_inputs_embeds, head_mask, output_attentions, output_hidden_states, return_tuple, **kwargs)\r\n 995 \r\n 996 # Decode\r\n--> 997 decoder_outputs = self.decoder(\r\n 998 input_ids=decoder_input_ids,\r\n 999 attention_mask=decoder_attention_mask,\r\n\r\n/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 720 result = self._slow_forward(*input, **kwargs)\r\n 721 else:\r\n--> 722 result = self.forward(*input, **kwargs)\r\n 723 for hook in itertools.chain(\r\n 724 _global_forward_hooks.values(),\r\n\r\n/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_value_states, use_cache, output_attentions, output_hidden_states, return_tuple)\r\n 701 if inputs_embeds is None:\r\n 702 assert self.embed_tokens is not None, \"You have to intialize the model with valid token embeddings\"\r\n--> 703 inputs_embeds = self.embed_tokens(input_ids)\r\n 704 \r\n 705 batch_size, seq_length = 
input_shape\r\n\r\n/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 720 result = self._slow_forward(*input, **kwargs)\r\n 721 else:\r\n--> 722 result = self.forward(*input, **kwargs)\r\n 723 for hook in itertools.chain(\r\n 724 _global_forward_hooks.values(),\r\n\r\n/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input)\r\n 122 \r\n 123 def forward(self, input: Tensor) -> Tensor:\r\n--> 124 return F.embedding(\r\n 125 input, self.weight, self.padding_idx, self.max_norm,\r\n 126 self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n\r\n/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)\r\n 1812 # remove once script supports set_grad_enabled\r\n 1813 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)\r\n-> 1814 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n 1815 \r\n 1816 \r\n\r\nIndexError: index out of range in self\r\n```\r\n\r\nWhen looking into the decoder_input_ids, I found the following out of id values:\r\n`tensor([[ 0, 150]])`\r\n\r\nIt seems like the decoder ignores the actual vocab size.\r\n\r\nAny idea what is the problem here ?",
"I cannot reproduce the error with my weights. Can you add a code sample that helps me to reproduce the error?",
"I imagine you don't use a tokenizer, right?",
"Hi @patrickvonplaten ,\r\n\r\nThanks for reply and I hope you had a good week-end.\r\n\r\nI am using the tokenizer.\r\n\r\nI have created a Colab notebook which replicates my issue:\r\nhttps://colab.research.google.com/drive/1NccKDsKObYYTjf8EPcrUUDfs0PyaRsaN?usp=sharing\r\n\r\nYou can see at the last cell:\r\n`tensor([[ 0, 121, 121, 905]])`\r\n905 is out of vocab id.\r\n\r\nNote:\r\nThe model used in the Colab notebook is a dummy checkpoint for the 3B model (.i.e.: checkpoint 0). ",
"I see what the problem is - you should use `T5ForConditionalGeneration` instead of `T5Model`. The T5Model does not have the `lm_head` on top and therefore returns a matrix of length 1024 instead of 128. \r\nHere a notebook that runs without error:\r\nhttps://colab.research.google.com/drive/1u3VxPkFfrm0Z3f6SMIZTXbazFUhXNe0Z?usp=sharing. \r\n\r\nAlso, you have to make sure that the model you pass for the conversion script is a `T5ForConditionalGeneration` model and not only a `T5Model`. \r\n\r\nIt might be possible that the checkpoint of your google colab is not correct because of this. \r\nCan you also check this 3b checkpoint that I converted as explained above:\r\n\r\n```python \r\nconfig = T5Config.from_pretrained(\"patrickvonplaten/t5_3b_covid\")\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"patrickvonplaten/t5_3b_covid\")\r\n```\r\n\r\nAs explained above, the output word embedding (`lm_head`) of the 'patrickvonplaten/t5_3b_covid' model are different to the input embedding, so it would be good if you check this model to your checkpoint in the notebook to see which performs as expected (hopefully at least one of them). \r\n\r\nLet me know if this works. PS. I'm currently working on adding better functionality to share encoder and decoder weights for Seq2Seq models. Once this is done we can reduce this checkpoint to half its size.\r\n",
"Thanks @patrickvonplaten for the clarification. \r\nUsing T5ForConditionalGeneration did solve my issue, and now my models work as expected.\r\n\r\nThanks a lot for optimizing the shared encoder/decoder weights.\r\nThis will be really helpful to allow large models 3B/11B to fit on a single GPU for inference.\r\nThis will be the last point to cover before closing this issue. \r\n\r\n\r\n"
] | 1,595 | 1,597 | 1,597 | CONTRIBUTOR | null | We are training a large-scale T5-11B model using a TPU v3-1024 for a Covid-19 project.
We tried to convert the TensorFlow checkpoint to the PyTorch version, but the conversion failed.
Could you please help us figure out the problem, since this model is very important for Covid-19 research?
# 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
T5
Language I am using the model on (English, Chinese ...):
Protein Sequences
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. The config file:
```
{
"architectures": [
"T5WithLMHeadModel"
],
"d_ff": 65536,
"d_kv": 128,
"d_model": 1024,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "t5",
"n_positions": 512,
"num_heads": 128,
"num_layers": 24,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"vocab_size": 128
}
```
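As a quick sanity check on these values: the per-layer attention projections in the conversion log below have shape [1024, 16384], which is exactly d_model × (num_heads · d_kv) from this config. A small stdlib sketch (the JSON is abbreviated and inlined here purely for illustration):

```python
import json

# Abbreviated copy of the config above, inlined for illustration.
config = json.loads("""
{
  "d_ff": 65536,
  "d_kv": 128,
  "d_model": 1024,
  "num_heads": 128,
  "num_layers": 24,
  "vocab_size": 128
}
""")

# Inner width of the stacked attention projections: num_heads * d_kv.
inner_dim = config["num_heads"] * config["d_kv"]
print(config["d_model"], inner_dim)  # 1024 16384, matching the TF weight shapes
```

A mismatch here (e.g., a wrong `d_kv` or `num_heads`) is a common cause of shape errors when converting checkpoints.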
2. conversion command:
```
python convert_t5_original_tf_checkpoint_to_pytorch.py \
--tf_checkpoint_path xxx/tensorflow \
--config_file xxx/t5-11b-config.json \
--pytorch_dump_path xxx/pytorch
```
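For scripting repeated conversion attempts, the same invocation can be assembled programmatically. A minimal sketch — `build_conversion_cmd` is a hypothetical helper (not part of transformers), and the `xxx/...` paths are the placeholders from the command above:

```python
import shlex

def build_conversion_cmd(tf_ckpt, config, dump_path):
    """Assemble the argv for the conversion script invocation shown above."""
    return [
        "python", "convert_t5_original_tf_checkpoint_to_pytorch.py",
        "--tf_checkpoint_path", tf_ckpt,
        "--config_file", config,
        "--pytorch_dump_path", dump_path,
    ]

cmd = build_conversion_cmd("xxx/tensorflow", "xxx/t5-11b-config.json", "xxx/pytorch")
print(shlex.join(cmd))
# subprocess.run(cmd, check=True) would execute it (import subprocess first)
```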
3. Error:
```
Building PyTorch model from configuration: T5Config {
"architectures": [
"T5WithLMHeadModel"
],
"d_ff": 65536,
"d_kv": 128,
"d_model": 1024,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "t5",
"n_positions": 512,
"num_heads": 128,
"num_layers": 24,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"vocab_size": 128
}
INFO:transformers.modeling_t5:Converting TensorFlow checkpoint from /mnt/lsf-nas-1/lsf/job/repo/elnaggar/prot-transformers/models/t5/tensorflow/bfd100
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/relative_attention_bias with shape [128, 32]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/relative_attention_bias_slot_v with shape [128, 32]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_000/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_001/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_002/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_003/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_004/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_005/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_006/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_007/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_008/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_009/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_010/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_011/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_012/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_013/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_014/layer_002/rms_norm/scale_slot_v with shape [1024]
[… the identical set of "Loading TF weight" lines repeats for decoder blocks 015 through 021: layer_000 SelfAttention k/q/v [1024, 16384] and o [16384, 1024], layer_001 EncDecAttention k/q/v [1024, 16384] and o [16384, 1024], layer_002 DenseReluDense wi/kernel [1024, 65536] and wo/kernel [65536, 1024], each rms_norm/scale [1024], plus the corresponding _slot_vc [16384 or 65536], _slot_vr [1024], and _slot_v [1024] optimizer slot variables …]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_022/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/EncDecAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/EncDecAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/EncDecAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/EncDecAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/EncDecAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/EncDecAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/EncDecAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/EncDecAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/EncDecAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/EncDecAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/EncDecAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/EncDecAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_002/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_002/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_002/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_002/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_002/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_002/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_002/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/block_023/layer_002/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight decoder/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/relative_attention_bias with shape [128, 32]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/relative_attention_bias_slot_v with shape [128, 32]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_000/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_001/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_002/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_003/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_004/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_005/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_006/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_007/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_008/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_009/layer_001/rms_norm/scale_slot_v with shape [1024]
[... identical INFO lines repeated for encoder blocks 010 through 020: each block loads the same weights with the same shapes — SelfAttention k/q/v ([1024, 16384]) and o ([16384, 1024]) with their _slot_vc/_slot_vr optimizer slots, rms_norm scale and scale_slot_v ([1024]), and DenseReluDense wi ([1024, 65536]) / wo ([65536, 1024]) kernels with their slots ...]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_021/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_022/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/SelfAttention/k with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/SelfAttention/k_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/SelfAttention/k_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/SelfAttention/o with shape [16384, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/SelfAttention/o_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/SelfAttention/o_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/SelfAttention/q with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/SelfAttention/q_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/SelfAttention/q_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/SelfAttention/v with shape [1024, 16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/SelfAttention/v_slot_vc with shape [16384]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/SelfAttention/v_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_000/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_001/DenseReluDense/wi/kernel with shape [1024, 65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_001/DenseReluDense/wi/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_001/DenseReluDense/wi/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_001/DenseReluDense/wo/kernel with shape [65536, 1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_001/DenseReluDense/wo/kernel_slot_vc with shape [65536]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_001/DenseReluDense/wo/kernel_slot_vr with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_001/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/block_023/layer_001/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/rms_norm/scale with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight encoder/rms_norm/scale_slot_v with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight global_step with shape []
INFO:transformers.modeling_t5:Loading TF weight shared/embedding with shape [128, 1024]
INFO:transformers.modeling_t5:Loading TF weight shared/embedding_slot_vc with shape [1024]
INFO:transformers.modeling_t5:Loading TF weight shared/embedding_slot_vr with shape [128]
INFO:transformers.modeling_t5:Transposing numpy weight of shape (1024, 16384) for ['decoder', 'block_000', 'layer_000', 'SelfAttention', 'k']
INFO:transformers.modeling_t5:Initialize PyTorch weight ['decoder', 'block_000', 'layer_000', 'SelfAttention', 'k']
INFO:transformers.modeling_t5:Skipping decoder/block_000/layer_000/SelfAttention/k_slot_vc
INFO:transformers.modeling_t5:Skipping decoder/block_000/layer_000/SelfAttention/k_slot_vr
INFO:transformers.modeling_t5:Transposing numpy weight of shape (16384, 1024) for ['decoder', 'block_000', 'layer_000', 'SelfAttention', 'o']
INFO:transformers.modeling_t5:Initialize PyTorch weight ['decoder', 'block_000', 'layer_000', 'SelfAttention', 'o']
INFO:transformers.modeling_t5:Skipping decoder/block_000/layer_000/SelfAttention/o_slot_vc
INFO:transformers.modeling_t5:Skipping decoder/block_000/layer_000/SelfAttention/o_slot_vr
INFO:transformers.modeling_t5:Transposing numpy weight of shape (1024, 16384) for ['decoder', 'block_000', 'layer_000', 'SelfAttention', 'q']
INFO:transformers.modeling_t5:Initialize PyTorch weight ['decoder', 'block_000', 'layer_000', 'SelfAttention', 'q']
INFO:transformers.modeling_t5:Skipping decoder/block_000/layer_000/SelfAttention/q_slot_vc
INFO:transformers.modeling_t5:Skipping decoder/block_000/layer_000/SelfAttention/q_slot_vr
INFO:transformers.modeling_t5:Transposing numpy weight of shape (128, 32) for ['decoder', 'block_000', 'layer_000', 'SelfAttention', 'relative_attention_bias']
INFO:transformers.modeling_t5:Initialize PyTorch weight ['decoder', 'block_000', 'layer_000', 'SelfAttention', 'relative_attention_bias']
INFO:transformers.modeling_t5:Skipping decoder/block_000/layer_000/SelfAttention/relative_attention_bias_slot_v
INFO:transformers.modeling_t5:Transposing numpy weight of shape (1024, 16384) for ['decoder', 'block_000', 'layer_000', 'SelfAttention', 'v']
INFO:transformers.modeling_t5:Initialize PyTorch weight ['decoder', 'block_000', 'layer_000', 'SelfAttention', 'v']
INFO:transformers.modeling_t5:Skipping decoder/block_000/layer_000/SelfAttention/v_slot_vc
INFO:transformers.modeling_t5:Skipping decoder/block_000/layer_000/SelfAttention/v_slot_vr
INFO:transformers.modeling_t5:Skipping decoder/block_000/layer_000/rms_norm/scale
Traceback (most recent call last):
File "xxx/convert_t5_original_tf_checkpoint_to_pytorch.py", line 61, in <module>
convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.config_file, args.pytorch_dump_path)
File "xxx/convert_t5_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_$ytorch
load_tf_weights_in_t5(model, config, tf_checkpoint_path)
File "xxx/modeling_t5.py", line 102, in load_tf_weights_in_t5
pointer = getattr(pointer, "weight")
File "xxx/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'T5LayerSelfAttention' object has no attribute 'weight'
```
## Expected behavior
The T5 TensorFlow model should be converted to a PyTorch model.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-4.15.0-101-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5851/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5850 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5850/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5850/comments | https://api.github.com/repos/huggingface/transformers/issues/5850/events | https://github.com/huggingface/transformers/issues/5850 | 659,396,272 | MDU6SXNzdWU2NTkzOTYyNzI= | 5,850 | pip install error | {
"login": "devjwsong",
"id": 16731987,
"node_id": "MDQ6VXNlcjE2NzMxOTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/16731987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devjwsong",
"html_url": "https://github.com/devjwsong",
"followers_url": "https://api.github.com/users/devjwsong/followers",
"following_url": "https://api.github.com/users/devjwsong/following{/other_user}",
"gists_url": "https://api.github.com/users/devjwsong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devjwsong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devjwsong/subscriptions",
"organizations_url": "https://api.github.com/users/devjwsong/orgs",
"repos_url": "https://api.github.com/users/devjwsong/repos",
"events_url": "https://api.github.com/users/devjwsong/events{/privacy}",
"received_events_url": "https://api.github.com/users/devjwsong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@devJWSong, how did you solve this?",
"Oh, actually I didn't solve it.\r\nI just installed downgraded version which is 2.11.0. and it worked.\r\nI still don't know the reason but I think it is the problem from my virtual environment setting since when I tried to install the recent version in the different environment, it worked...",
"its error occurs to me too.... could you give me another solution about that problems? ...",
"Any updates on this problem?",
"Try changing index-url and trusted-host in pip config.\r\nI had same issue with the environment with index-url='http://ftp.daumkakao.com/pypi/simple' and trusted-host='ftp.daumkakao.com', but everything worked well with the environment without such config.\r\n\r\n+)\r\ntry `pip install transformers -i https://pypi.python.org/simple`",
"Thank you! It worked. :)"
] | 1,595 | 1,678 | 1,599 | NONE | null | Hi, I tried to install transformers library via `pip install transformers` and I got tokenizer install error.
The error logs are as follows.
```
ERROR: Could not find a version that satisfies the requirement tokenizers==0.8.1.rc1 (from transformers) (from versions: 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.1.0, 0.1.1, 0.2.0, 0.2.1, 0.3.0, 0.4.0, 0.4.1, 0.4.2, 0.5.0, 0.5.1, 0.5.2, 0.6.0, 0.7.0, 0.8.0)
ERROR: No matching distribution found for tokenizers==0.8.1.rc1 (from transformers)
```
I googled it, but I couldn't find a way to solve it.
Is there anything I can do to handle this issue?
Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5850/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5849 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5849/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5849/comments | https://api.github.com/repos/huggingface/transformers/issues/5849/events | https://github.com/huggingface/transformers/pull/5849 | 659,389,613 | MDExOlB1bGxSZXF1ZXN0NDUxMjM4Njc0 | 5,849 | Update shortcut name for reformer in pretrained_models.srt | {
"login": "SamuelCahyawijaya",
"id": 2826602,
"node_id": "MDQ6VXNlcjI4MjY2MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelCahyawijaya",
"html_url": "https://github.com/SamuelCahyawijaya",
"followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers",
"following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs",
"repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos",
"events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I agree with the change but I think you probably broke the rst",
"> I agree with the change but I think you probably broke the rst\r\n\r\nAh, my bad, I have updated the rst file",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5849?src=pr&el=h1) Report\n> Merging [#5849](https://codecov.io/gh/huggingface/transformers/pull/5849?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d37c56bab8f7f1f1aa0b65be039516072254e77&el=desc) will **decrease** coverage by `1.19%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5849?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5849 +/- ##\n==========================================\n- Coverage 78.48% 77.28% -1.20% \n==========================================\n Files 146 146 \n Lines 26200 26200 \n==========================================\n- Hits 20563 20249 -314 \n- Misses 5637 5951 +314 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5849?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5849/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5849?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5849?src=pr&el=footer). Last update [9d37c56...1103b9e](https://codecov.io/gh/huggingface/transformers/pull/5849?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I check that this update has been included in the main repo directly, closing this PR"
] | 1,595 | 1,595 | 1,595 | NONE | null | Adding `google/` prefix on shortcut name for reformer in `pretrained_models.rst` according to mapping in `configuration_reformer.py` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5849/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5849",
"html_url": "https://github.com/huggingface/transformers/pull/5849",
"diff_url": "https://github.com/huggingface/transformers/pull/5849.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5849.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5848 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5848/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5848/comments | https://api.github.com/repos/huggingface/transformers/issues/5848/events | https://github.com/huggingface/transformers/issues/5848 | 659,380,460 | MDU6SXNzdWU2NTkzODA0NjA= | 5,848 | AttributeError: type object 'BertConfig' has no attribute 'pretrained_config_archive_map' | {
"login": "lethienhoa",
"id": 7143255,
"node_id": "MDQ6VXNlcjcxNDMyNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7143255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lethienhoa",
"html_url": "https://github.com/lethienhoa",
"followers_url": "https://api.github.com/users/lethienhoa/followers",
"following_url": "https://api.github.com/users/lethienhoa/following{/other_user}",
"gists_url": "https://api.github.com/users/lethienhoa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lethienhoa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lethienhoa/subscriptions",
"organizations_url": "https://api.github.com/users/lethienhoa/orgs",
"repos_url": "https://api.github.com/users/lethienhoa/repos",
"events_url": "https://api.github.com/users/lethienhoa/events{/privacy}",
"received_events_url": "https://api.github.com/users/lethienhoa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"it'd better to downgrade the version: pip install transformer==2.0.0 (which solves the compatible issue).",
"`$ pip install transformers==2.0.0`\r\n\r\nworks",
"Hi, is there any other approach other than downgrading?",
"try\r\n\r\n> `$from transformers import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP`\r\n\r\nor \r\n`$from transformers import BERT_PRETRAINED_CONFIG_ARCHIVE_MAP`",
"Hi! Could you let us know the use cases of using the archive maps?",
"\r\nHi, \r\n<img width=\"712\" alt=\"截屏2021-02-26 下午9 38 01\" src=\"https://user-images.githubusercontent.com/7147150/109307249-49d5f000-787b-11eb-91eb-f076046f180a.png\">\r\nI used it to provide the hint of cmd args configuration.",
"I see, thank you! The issue is that these archive maps are outdated and don't mean much since the model hub.\r\n\r\nIn that case, it would be better to have a way to see all checkpoints on the hub corresponding to that model architecture, right?",
"If anyone has already solved the issue please let me know. I am having same issue.\r\nhttps://hjlabs.in",
"`$from transformers import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP`\r\nthis solved my issue. Thank you everyone for support. you are great.\r\nhttps://hjlabs.in",
"For legacy codebases, you could store a static tuple of `ALL_MODELS` evaluated from a downgraded `transformers`:\r\n\r\n```python\r\nALL_MODELS = ('bert-base-uncased', 'bert-large-uncased', 'bert-base-cased', 'bert-large-cased', 'bert-base-multilingual-uncased', 'bert-base-multilingual-cased', 'bert-base-chinese', 'bert-base-german-cased', 'bert-large-uncased-whole-word-masking', 'bert-large-cased-whole-word-masking', 'bert-large-uncased-whole-word-masking-finetuned-squad', 'bert-large-cased-whole-word-masking-finetuned-squad', 'bert-base-cased-finetuned-mrpc', 'bert-base-german-dbmdz-cased', 'bert-base-german-dbmdz-uncased', 'bert-base-japanese', 'bert-base-japanese-whole-word-masking', 'bert-base-japanese-char', 'bert-base-japanese-char-whole-word-masking', 'bert-base-finnish-cased-v1', 'bert-base-finnish-uncased-v1', 'roberta-base', 'roberta-large', 'roberta-large-mnli', 'distilroberta-base', 'roberta-base-openai-detector', 'roberta-large-openai-detector', 'albert-base-v1', 'albert-large-v1', 'albert-xlarge-v1', 'albert-xxlarge-v1', 'albert-base-v2', 'albert-large-v2', 'albert-xlarge-v2', 'albert-xxlarge-v2')\r\n```"
] | 1,595 | 1,695 | 1,595 | NONE | null | Hi,
After changing from `BERT_PRETRAINED_MODEL_ARCHIVE_MAP` to `BERT_PRETRAINED_MODEL_ARCHIVE_LIST` in #5842 , I encountered another issue:
> File "main.py", line 24, in <genexpr>
> ALL_MODELS = sum((tuple(conf.pretrained_config_archive_map.keys()) for conf in (BertConfig, XLNetConfig)), ())
> AttributeError: type object 'BertConfig' has no attribute 'pretrained_config_archive_map'
Is this also a breaking change?
What is the replacement name for `pretrained_config_archive_map` now?
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5848/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5847 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5847/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5847/comments | https://api.github.com/repos/huggingface/transformers/issues/5847/events | https://github.com/huggingface/transformers/pull/5847 | 659,348,376 | MDExOlB1bGxSZXF1ZXN0NDUxMjAyNjg5 | 5,847 | Create README.md | {
"login": "jannesgg",
"id": 36601086,
"node_id": "MDQ6VXNlcjM2NjAxMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/36601086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jannesgg",
"html_url": "https://github.com/jannesgg",
"followers_url": "https://api.github.com/users/jannesgg/followers",
"following_url": "https://api.github.com/users/jannesgg/following{/other_user}",
"gists_url": "https://api.github.com/users/jannesgg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jannesgg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jannesgg/subscriptions",
"organizations_url": "https://api.github.com/users/jannesgg/orgs",
"repos_url": "https://api.github.com/users/jannesgg/repos",
"events_url": "https://api.github.com/users/jannesgg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jannesgg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5847?src=pr&el=h1) Report\n> Merging [#5847](https://codecov.io/gh/huggingface/transformers/pull/5847?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d37c56bab8f7f1f1aa0b65be039516072254e77&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5847?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5847 +/- ##\n=======================================\n Coverage 78.48% 78.48% \n=======================================\n Files 146 146 \n Lines 26200 26200 \n=======================================\n+ Hits 20563 20564 +1 \n+ Misses 5637 5636 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5847?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5847/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5847?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5847?src=pr&el=footer). Last update [9d37c56...07227e0](https://codecov.io/gh/huggingface/transformers/pull/5847?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5847/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5847",
"html_url": "https://github.com/huggingface/transformers/pull/5847",
"diff_url": "https://github.com/huggingface/transformers/pull/5847.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5847.patch",
"merged_at": 1595009034000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5846 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5846/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5846/comments | https://api.github.com/repos/huggingface/transformers/issues/5846/events | https://github.com/huggingface/transformers/issues/5846 | 659,332,126 | MDU6SXNzdWU2NTkzMzIxMjY= | 5,846 | Decode [UNK] from tokenizer | {
"login": "RodSernaPerez",
"id": 37450380,
"node_id": "MDQ6VXNlcjM3NDUwMzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/37450380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RodSernaPerez",
"html_url": "https://github.com/RodSernaPerez",
"followers_url": "https://api.github.com/users/RodSernaPerez/followers",
"following_url": "https://api.github.com/users/RodSernaPerez/following{/other_user}",
"gists_url": "https://api.github.com/users/RodSernaPerez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RodSernaPerez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RodSernaPerez/subscriptions",
"organizations_url": "https://api.github.com/users/RodSernaPerez/orgs",
"repos_url": "https://api.github.com/users/RodSernaPerez/repos",
"events_url": "https://api.github.com/users/RodSernaPerez/events{/privacy}",
"received_events_url": "https://api.github.com/users/RodSernaPerez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@RodSernaPerez I am not sure what you are doing, but the model will do that if it doesn't find your word in the vocab.txt file. You have to use `BertTokenizer` before training, so that it splits all of the words that are not in the vocab file into word tokens. I think it will mark words that it wasn't able to tokenize with `[UNK]`. As to your question, what do you mean by index? I am assuming you know where the `[UNK]` was in white space tokenized word. You can concatenate BERT word tokens first and then you can try to find word marked with `[UNK]` in that list of concatenated tokens.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,602 | 1,602 | NONE | null | I am using Bert for NER and I have a problem when the tokenizer uses the token [UNK] to mask a word. How can I easily know which was the original word given its index? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5846/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5845 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5845/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5845/comments | https://api.github.com/repos/huggingface/transformers/issues/5845/events | https://github.com/huggingface/transformers/pull/5845 | 659,235,801 | MDExOlB1bGxSZXF1ZXN0NDUxMTA3MDQ3 | 5,845 | Added model card for neuraly/bert-base-italian-cased-sentiment | {
"login": "gianpy15",
"id": 26765244,
"node_id": "MDQ6VXNlcjI2NzY1MjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/26765244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gianpy15",
"html_url": "https://github.com/gianpy15",
"followers_url": "https://api.github.com/users/gianpy15/followers",
"following_url": "https://api.github.com/users/gianpy15/following{/other_user}",
"gists_url": "https://api.github.com/users/gianpy15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gianpy15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gianpy15/subscriptions",
"organizations_url": "https://api.github.com/users/gianpy15/orgs",
"repos_url": "https://api.github.com/users/gianpy15/repos",
"events_url": "https://api.github.com/users/gianpy15/events{/privacy}",
"received_events_url": "https://api.github.com/users/gianpy15/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5845?src=pr&el=h1) Report\n> Merging [#5845](https://codecov.io/gh/huggingface/transformers/pull/5845?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0b6c255a95368163d2b1d37635e5ce5bdd1b9423&el=desc) will **decrease** coverage by `1.15%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5845?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5845 +/- ##\n==========================================\n- Coverage 78.50% 77.35% -1.16% \n==========================================\n Files 146 146 \n Lines 26049 26049 \n==========================================\n- Hits 20450 20150 -300 \n- Misses 5599 5899 +300 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5845?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5845?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5845?src=pr&el=footer). Last update [0b6c255...cf779f6](https://codecov.io/gh/huggingface/transformers/pull/5845?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hello guys, nice job. The tutorial seems broken to me.\r\nhttps://github.com/huggingface/transformers/issues/14194"
] | 1,594 | 1,635 | 1,595 | CONTRIBUTOR | null | Hello, we are making this pull request to add our Italian sentiment model to your repository of transformers.
Thank you again for hosting the models :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5845/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5845",
"html_url": "https://github.com/huggingface/transformers/pull/5845",
"diff_url": "https://github.com/huggingface/transformers/pull/5845.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5845.patch",
"merged_at": 1595008250000
} |
https://api.github.com/repos/huggingface/transformers/issues/5844 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5844/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5844/comments | https://api.github.com/repos/huggingface/transformers/issues/5844/events | https://github.com/huggingface/transformers/issues/5844 | 659,208,624 | MDU6SXNzdWU2NTkyMDg2MjQ= | 5,844 | problem about Custom class inheriting from <TFBertPreTrainedModel> | {
"login": "jianrui1995",
"id": 20520524,
"node_id": "MDQ6VXNlcjIwNTIwNTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/20520524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jianrui1995",
"html_url": "https://github.com/jianrui1995",
"followers_url": "https://api.github.com/users/jianrui1995/followers",
"following_url": "https://api.github.com/users/jianrui1995/following{/other_user}",
"gists_url": "https://api.github.com/users/jianrui1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jianrui1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jianrui1995/subscriptions",
"organizations_url": "https://api.github.com/users/jianrui1995/orgs",
"repos_url": "https://api.github.com/users/jianrui1995/repos",
"events_url": "https://api.github.com/users/jianrui1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/jianrui1995/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"We are trying to move these kinds of questions (which are not obvious bugs, but question for customized use cases) more and more to https://discuss.huggingface.co/. Would you mind posting it there again with your sample code:\r\n```python\r\nclass Model(transformers.TFBertPreTrainedModel):\r\n def __init__(self,config, *inputs, **kwargs):\r\n super(Model,self).__init__(config, *inputs, **kwargs)\r\n self.bert = transformers.TFBertMainLayer(config,name=\"bert\")\r\n \r\n\r\n @tf.function\r\n def call(self, inputs, training=None, mask=None,**kwargs):\r\n out = self.bert(inputs)\r\n print(out)\r\n return out\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,601 | 1,601 | NONE | null | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
I constructed class inheriting from TFBertPreTrainedModel. as follows:
```
class Model(transformers.TFBertPreTrainedModel):
def __init__(self,config, *inputs, **kwargs):
super(Model,self).__init__(config, *inputs, **kwargs)
self.bert = transformers.TFBertMainLayer(config,name="bert")
@tf.function
def call(self, inputs, training=None, mask=None,**kwargs):
out = self.bert(inputs)
print(out)
return out
```
The main program is as follows:
```python
if __name__ == "__main__":
    tokenizer = transformers.BertTokenizer("model/chinese_L-12_H-768_A-12/vocab.txt")
    text_2 = tokenizer.batch_encode_plus(["你买啊,买了就是成都人", "你来啊,来了就是深圳人"], max_length=20, pad_to_max_length=True)
    print(text_2)
    model = Model.from_pretrained("bert-base-chinese")
    out = model([tf.convert_to_tensor(text_2["input_ids"]), tf.convert_to_tensor(text_2['attention_mask'])])
```
Unfortunately, the console output was different from what I expected when I ran this program. I expected the output of **print(out)** to be a tensor of size [2,20,768], but in fact the print ran three times, and the outputs were **(<tf.Tensor 'bert/encoder/layer_._11/output/LayerNorm/batchnorm/add_1:0' shape=(3, 5, 768) dtype=float32>, <tf.Tensor 'bert/pooler/dense/Tanh:0' shape=(3, 768) dtype=float32>)**, **(<tf.Tensor 'bert/encoder/layer_._11/output/LayerNorm/batchnorm/add_1:0' shape=(3, 5, 768) dtype=float32>, <tf.Tensor 'bert/pooler/dense/Tanh:0' shape=(3, 768) dtype=float32>)** and **(<tf.Tensor 'bert/encoder/layer_._11/output/LayerNorm/batchnorm/add_1:0' shape=(2, 20, 768) dtype=float32>, <tf.Tensor 'bert/pooler/dense/Tanh:0' shape=(2, 768) dtype=float32>)**
My questions are: 1) why does this happen? 2) how can I get the final outputs to feed into the following layers?
I am looking forward to your help, thanks
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5844/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5843 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5843/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5843/comments | https://api.github.com/repos/huggingface/transformers/issues/5843/events | https://github.com/huggingface/transformers/pull/5843 | 659,116,991 | MDExOlB1bGxSZXF1ZXN0NDUxMDAxNzkz | 5,843 | Added model card for neuraly/bert-base-italian-cased-sentiment | {
"login": "gianpy15",
"id": 26765244,
"node_id": "MDQ6VXNlcjI2NzY1MjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/26765244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gianpy15",
"html_url": "https://github.com/gianpy15",
"followers_url": "https://api.github.com/users/gianpy15/followers",
"following_url": "https://api.github.com/users/gianpy15/following{/other_user}",
"gists_url": "https://api.github.com/users/gianpy15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gianpy15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gianpy15/subscriptions",
"organizations_url": "https://api.github.com/users/gianpy15/orgs",
"repos_url": "https://api.github.com/users/gianpy15/repos",
"events_url": "https://api.github.com/users/gianpy15/events{/privacy}",
"received_events_url": "https://api.github.com/users/gianpy15/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5843?src=pr&el=h1) Report\n> Merging [#5843](https://codecov.io/gh/huggingface/transformers/pull/5843?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0b6c255a95368163d2b1d37635e5ce5bdd1b9423&el=desc) will **decrease** coverage by `0.39%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5843?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5843 +/- ##\n==========================================\n- Coverage 78.50% 78.11% -0.40% \n==========================================\n Files 146 146 \n Lines 26049 26049 \n==========================================\n- Hits 20450 20347 -103 \n- Misses 5599 5702 +103 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5843?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | 
:arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5843?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5843?src=pr&el=footer). Last update [0b6c255...d6d30df](https://codecov.io/gh/huggingface/transformers/pull/5843?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Hello, we are making this pull request to add our Italian sentiment model to your repository of transformers.
Thank you again for hosting the models :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5843/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5843",
"html_url": "https://github.com/huggingface/transformers/pull/5843",
"diff_url": "https://github.com/huggingface/transformers/pull/5843.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5843.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5842 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5842/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5842/comments | https://api.github.com/repos/huggingface/transformers/issues/5842/events | https://github.com/huggingface/transformers/issues/5842 | 659,098,169 | MDU6SXNzdWU2NTkwOTgxNjk= | 5,842 | ImportError: cannot import name 'BERT_PRETRAINED_MODEL_ARCHIVE_MAP' from 'transformers' | {
"login": "lethienhoa",
"id": 7143255,
"node_id": "MDQ6VXNlcjcxNDMyNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7143255?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lethienhoa",
"html_url": "https://github.com/lethienhoa",
"followers_url": "https://api.github.com/users/lethienhoa/followers",
"following_url": "https://api.github.com/users/lethienhoa/following{/other_user}",
"gists_url": "https://api.github.com/users/lethienhoa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lethienhoa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lethienhoa/subscriptions",
"organizations_url": "https://api.github.com/users/lethienhoa/orgs",
"repos_url": "https://api.github.com/users/lethienhoa/repos",
"events_url": "https://api.github.com/users/lethienhoa/events{/privacy}",
"received_events_url": "https://api.github.com/users/lethienhoa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yeah, there was a breaking change. The name has been changed to `BERT_PRETRAINED_MODEL_ARCHIVE_LIST`.\r\n\r\nSee https://github.com/huggingface/transformers/pull/4636.",
"ImportError: cannot import name 'BertweetTokenizer' from 'transformers' (/opt/conda/lib/python3.7/site-packages/transformers/__init__.py)\r\n",
"ImportError: cannot import name 'SquadExample' from 'transformers' (unknown location)\r\n\r\nI get this error when I try to import QuestionAnsweringModel from simpletransformers.question_answering\r\n\r\nAny help would be appreciated!"
] | 1,594 | 1,615 | 1,594 | NONE | null | Hi,
I'm rerunning the code BERT-E2E-ABSA from https://github.com/lixin4ever/BERT-E2E-ABSA, which is based on transformers 2.0.0
I installed the latest transformers 3.0.2 (both from pip and souce) and I have an error that I can not import BERT_PRETRAINED_MODEL_ARCHIVE_MAP.
This simple command will result in errors below:
> python -c "from transformers import BERT_PRETRAINED_MODEL_ARCHIVE_MAP"
> Traceback (most recent call last):
> File "<string>", line 1, in <module>
> ImportError: cannot import name 'BERT_PRETRAINED_MODEL_ARCHIVE_MAP' from 'transformers' (/home/hoa/transformers/src/transformers/__init__.py)
Transformers itself is installed correctly, as it can do the following:
> python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I hate you'))"
> [{'label': 'NEGATIVE', 'score': 0.9991129040718079}]
Do you know why? Did I miss something?
- `transformers` version: 3.0.2
- Platform: Linux-5.4.0-1019-gcp-x86_64-with-debian-bullseye-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.2.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
- Other configurations: Ubuntu 20.04, Python 3.7, CUDA 11
Note: I even installed PyTorch on a CPU-only laptop and reran the script, and still had the same issue!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5842/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5841 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5841/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5841/comments | https://api.github.com/repos/huggingface/transformers/issues/5841/events | https://github.com/huggingface/transformers/pull/5841 | 659,065,209 | MDExOlB1bGxSZXF1ZXN0NDUwOTU2MDUw | 5,841 | [Model card] Bert2Bert | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,594 | 1,594 | 1,594 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5841/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5841",
"html_url": "https://github.com/huggingface/transformers/pull/5841",
"diff_url": "https://github.com/huggingface/transformers/pull/5841.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5841.patch",
"merged_at": 1594978917000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5840 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5840/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5840/comments | https://api.github.com/repos/huggingface/transformers/issues/5840/events | https://github.com/huggingface/transformers/pull/5840 | 659,061,211 | MDExOlB1bGxSZXF1ZXN0NDUwOTUyNTA1 | 5,840 | [WIP - Don't merge][EncoderDecoder] Extend Trainer for Bert2Bert | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5840?src=pr&el=h1) Report\n> Merging [#5840](https://codecov.io/gh/huggingface/transformers/pull/5840?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `1.26%`.\n> The diff coverage is `41.17%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5840?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5840 +/- ##\n==========================================\n- Coverage 77.79% 76.53% -1.27% \n==========================================\n Files 145 145 \n Lines 25355 25365 +10 \n==========================================\n- Hits 19726 19414 -312 \n- Misses 5629 5951 +322 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5840?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5840/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.41% <37.50%> (-0.55%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5840/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.77% <100.00%> (+0.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5840/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5840/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5840?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5840?src=pr&el=footer). Last update [fa5423b...d9f6d07](https://codecov.io/gh/huggingface/transformers/pull/5840?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,651 | 1,601 | MEMBER | null | This PR draft should be used to train a Bert2Bert model.
It's not ready to be merged yet. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5840/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5840",
"html_url": "https://github.com/huggingface/transformers/pull/5840",
"diff_url": "https://github.com/huggingface/transformers/pull/5840.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5840.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5839 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5839/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5839/comments | https://api.github.com/repos/huggingface/transformers/issues/5839/events | https://github.com/huggingface/transformers/pull/5839 | 659,036,396 | MDExOlB1bGxSZXF1ZXN0NDUwOTMwNjc3 | 5,839 | Created model card for my extreme summarization model | {
"login": "SchizoidBat",
"id": 40696362,
"node_id": "MDQ6VXNlcjQwNjk2MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/40696362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SchizoidBat",
"html_url": "https://github.com/SchizoidBat",
"followers_url": "https://api.github.com/users/SchizoidBat/followers",
"following_url": "https://api.github.com/users/SchizoidBat/following{/other_user}",
"gists_url": "https://api.github.com/users/SchizoidBat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SchizoidBat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SchizoidBat/subscriptions",
"organizations_url": "https://api.github.com/users/SchizoidBat/orgs",
"repos_url": "https://api.github.com/users/SchizoidBat/repos",
"events_url": "https://api.github.com/users/SchizoidBat/events{/privacy}",
"received_events_url": "https://api.github.com/users/SchizoidBat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5839?src=pr&el=h1) Report\n> Merging [#5839](https://codecov.io/gh/huggingface/transformers/pull/5839?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3d9556a72b6709d5fa09bf7bce7404158c169d21&el=desc) will **decrease** coverage by `0.72%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5839?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5839 +/- ##\n==========================================\n- Coverage 78.11% 77.38% -0.73% \n==========================================\n Files 146 146 \n Lines 26049 26049 \n==========================================\n- Hits 20347 20159 -188 \n- Misses 5702 5890 +188 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5839?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> 
(+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5839/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5839?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5839?src=pr&el=footer). Last update [3d9556a...05a0062](https://codecov.io/gh/huggingface/transformers/pull/5839?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5839/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5839",
"html_url": "https://github.com/huggingface/transformers/pull/5839",
"diff_url": "https://github.com/huggingface/transformers/pull/5839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5839.patch",
"merged_at": 1595318098000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5838 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5838/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5838/comments | https://api.github.com/repos/huggingface/transformers/issues/5838/events | https://github.com/huggingface/transformers/pull/5838 | 659,010,661 | MDExOlB1bGxSZXF1ZXN0NDUwOTA3ODE4 | 5,838 | Created model card for my summarization model | {
"login": "SchizoidBat",
"id": 40696362,
"node_id": "MDQ6VXNlcjQwNjk2MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/40696362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SchizoidBat",
"html_url": "https://github.com/SchizoidBat",
"followers_url": "https://api.github.com/users/SchizoidBat/followers",
"following_url": "https://api.github.com/users/SchizoidBat/following{/other_user}",
"gists_url": "https://api.github.com/users/SchizoidBat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SchizoidBat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SchizoidBat/subscriptions",
"organizations_url": "https://api.github.com/users/SchizoidBat/orgs",
"repos_url": "https://api.github.com/users/SchizoidBat/repos",
"events_url": "https://api.github.com/users/SchizoidBat/events{/privacy}",
"received_events_url": "https://api.github.com/users/SchizoidBat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5838?src=pr&el=h1) Report\n> Merging [#5838](https://codecov.io/gh/huggingface/transformers/pull/5838?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3d9556a72b6709d5fa09bf7bce7404158c169d21&el=desc) will **increase** coverage by `0.39%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5838?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5838 +/- ##\n==========================================\n+ Coverage 78.11% 78.50% +0.39% \n==========================================\n Files 146 146 \n Lines 26049 26049 \n==========================================\n+ Hits 20347 20450 +103 \n+ Misses 5702 5599 -103 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5838?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5838/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5838/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5838/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5838/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5838/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | 
:arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5838?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5838?src=pr&el=footer). Last update [3d9556a...b6dd51b](https://codecov.io/gh/huggingface/transformers/pull/5838?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thank you!"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5838/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5838",
"html_url": "https://github.com/huggingface/transformers/pull/5838",
"diff_url": "https://github.com/huggingface/transformers/pull/5838.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5838.patch",
"merged_at": 1595318055000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5837/comments | https://api.github.com/repos/huggingface/transformers/issues/5837/events | https://github.com/huggingface/transformers/pull/5837 | 658,945,130 | MDExOlB1bGxSZXF1ZXN0NDUwODUwMTY3 | 5,837 | [seq2seq] MAX_LEN env var for MT commands | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5837?src=pr&el=h1) Report\n> Merging [#5837](https://codecov.io/gh/huggingface/transformers/pull/5837?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/529850ae7bca0ff388778c3c0d66240834cf56c3&el=desc) will **decrease** coverage by `1.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5837?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5837 +/- ##\n==========================================\n- Coverage 78.48% 77.30% -1.19% \n==========================================\n Files 146 146 \n Lines 26200 26049 -151 \n==========================================\n- Hits 20563 20137 -426 \n- Misses 5637 5912 +275 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5837?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `87.86% <0.00%> (-2.71%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5837/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5837?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5837?src=pr&el=footer). Last update [529850a...ea4f6b0](https://codecov.io/gh/huggingface/transformers/pull/5837?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5837/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5837",
"html_url": "https://github.com/huggingface/transformers/pull/5837",
"diff_url": "https://github.com/huggingface/transformers/pull/5837.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5837.patch",
"merged_at": 1595040692000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5836/comments | https://api.github.com/repos/huggingface/transformers/issues/5836/events | https://github.com/huggingface/transformers/pull/5836 | 658,886,681 | MDExOlB1bGxSZXF1ZXN0NDUwNzk4ODY4 | 5,836 | [not sure whether to] pin torch<=1.5.1 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5836?src=pr&el=h1) Report\n> Merging [#5836](https://codecov.io/gh/huggingface/transformers/pull/5836?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d088d744adb4e5aa45262a34acab3ae9e81de169&el=desc) will **decrease** coverage by `0.98%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5836?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5836 +/- ##\n==========================================\n- Coverage 78.10% 77.12% -0.99% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20344 20088 -256 \n- Misses 5703 5959 +256 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5836?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5836/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5836?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5836?src=pr&el=footer). Last update [d088d74...dd8448b](https://codecov.io/gh/huggingface/transformers/pull/5836?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5836/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5836",
"html_url": "https://github.com/huggingface/transformers/pull/5836",
"diff_url": "https://github.com/huggingface/transformers/pull/5836.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5836.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5835/comments | https://api.github.com/repos/huggingface/transformers/issues/5835/events | https://github.com/huggingface/transformers/pull/5835 | 658,879,378 | MDExOlB1bGxSZXF1ZXN0NDUwNzkyNDQw | 5,835 | solve Illegal seek in wandb teardown (test suite) | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5835?src=pr&el=h1) Report\n> Merging [#5835](https://codecov.io/gh/huggingface/transformers/pull/5835?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0735def8e1200ed45a2c33a075bc1595b12ef56a&el=desc) will **decrease** coverage by `1.65%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5835?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5835 +/- ##\n==========================================\n- Coverage 80.08% 78.43% -1.66% \n==========================================\n Files 153 146 -7 \n Lines 27984 26012 -1972 \n==========================================\n- Hits 22412 20402 -2010 \n- Misses 5572 5610 +38 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5835?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5835/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-46.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5835/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `41.66% <0.00%> (-40.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5835/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-32.51%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5835/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <0.00%> (-14.29%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5835/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `76.47% <0.00%> (-11.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5835/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `80.98% <0.00%> (-10.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5835/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `87.70% <0.00%> (-7.98%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5835/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <0.00%> (-7.88%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5835/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <0.00%> (-5.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5835/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `74.41% <0.00%> (-4.20%)` | :arrow_down: |\n| ... and [81 more](https://codecov.io/gh/huggingface/transformers/pull/5835/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5835?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5835?src=pr&el=footer). Last update [0735def...9d566b1](https://codecov.io/gh/huggingface/transformers/pull/5835?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"CI doesn't have wandb installed.",
"I get an identical issue with wandb shutdown in other places as well, e.g. when running:\r\n\r\n```pytest examples/test_examples.py::ExamplesTests::test_run_glue```\r\n\r\nSo far the solution seems to be to disable wandb. This works:\r\n\r\n```\r\nWANDB_DISABLED=true pytest examples/test_examples.py::ExamplesTests::test_run_glue\r\n```\r\n\r\nUnless wandb integration is being tested perhaps disabling it globally will solve this everywhere?\r\n",
"pinging @borisdayma in case\r\n\r\n",
"The wandb cli is being updated and will have a noop mode that we can use for easier testing.\r\nFor now `WANDB_DISABLED=true` should work.",
"> The wandb cli is being updated and will have a noop mode that we can use for easier testing.\r\n\r\nwould it be possible to ping this issue when that happens, so that we can resolve it? Thank you very much, @borisdayma ",
"Sounds good, I'll keep an eye on it!",
"Seems to have been resolved upstream (wandb) - I can no longer reproduce this problem, closing"
] | 1,594 | 1,599 | 1,599 | CONTRIBUTOR | null | somehow wandb teardown fails in `tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_mrpc`:
```
pytest -n 1 tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_mrpc
====================================================================== test session starts =======================================================================
platform linux -- Python 3.7.5, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: /mnt/nvme1/code/huggingface/transformers-tests-1
plugins: hypothesis-5.5.4, filter-subpackage-0.1.1, arraydiff-0.3, flaky-3.6.1, ipynb-1.1.1.dev0, cov-2.10.0, astropy-header-0.1.2, forked-1.2.0, doctestplus-0.5.0, openfiles-0.4.0, remotedata-0.3.2, xdist-1.32.0
gw0 [1]
FE [100%]
wandb: Waiting for W&B process to finish, PID 19525
============================================================================= ERRORS =============================================================================
_______________________________________________ ERROR at teardown of TrainerIntegrationTest.test_trainer_eval_mrpc _______________________________________________
[gw0] linux -- Python 3.7.5 /home/stas/anaconda3/envs/main/bin/python
self = <contextlib._GeneratorContextManager object at 0x7efc47610690>, type = None, value = None, traceback = None
def __exit__(self, type, value, traceback):
if type is None:
try:
> next(self.gen)
E OSError: [Errno 29] Illegal seek
/home/stas/anaconda3/envs/main/lib/python3.7/contextlib.py:119: OSError
============================================================================ FAILURES ============================================================================
_________________________________________________________ TrainerIntegrationTest.test_trainer_eval_mrpc __________________________________________________________
[gw0] linux -- Python 3.7.5 /home/stas/anaconda3/envs/main/bin/python
self = <contextlib._GeneratorContextManager object at 0x7efc476bbd50>, type = None, value = None, traceback = None
def __exit__(self, type, value, traceback):
if type is None:
try:
> next(self.gen)
E OSError: [Errno 29] Illegal seek
/home/stas/anaconda3/envs/main/lib/python3.7/contextlib.py:119: OSError
==================================================================== short test summary info =====================================================================
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_mrpc - OSError: [Errno 29] Illegal seek
FAILED tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_mrpc - OSError: [Errno 29] Illegal seek
```
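A workaround mentioned in this thread is to disable wandb entirely for the test run via its environment variable. A minimal shell sketch (assuming wandb honors `WANDB_DISABLED`, as noted in the comments above; the pytest invocation is the one from the traceback):

```shell
# Disable wandb so its teardown hook never runs during the test session.
export WANDB_DISABLED=true
echo "wandb disabled: $WANDB_DISABLED"

# Then re-run the failing test, e.g.:
# pytest -n 1 tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_mrpc
```

This only sidesteps the teardown failure; it does not exercise the wandb integration itself.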
This is a possible fix, based on the solution suggested in
https://github.com/wandb/client/issues/1138#issuecomment-654943065 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5835/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5835",
"html_url": "https://github.com/huggingface/transformers/pull/5835",
"diff_url": "https://github.com/huggingface/transformers/pull/5835.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5835.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5834/comments | https://api.github.com/repos/huggingface/transformers/issues/5834/events | https://github.com/huggingface/transformers/pull/5834 | 658,876,698 | MDExOlB1bGxSZXF1ZXN0NDUwNzkwMDUx | 5,834 | Trainer support for iterabledataset | {
"login": "Pradhy729",
"id": 49659913,
"node_id": "MDQ6VXNlcjQ5NjU5OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/49659913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pradhy729",
"html_url": "https://github.com/Pradhy729",
"followers_url": "https://api.github.com/users/Pradhy729/followers",
"following_url": "https://api.github.com/users/Pradhy729/following{/other_user}",
"gists_url": "https://api.github.com/users/Pradhy729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pradhy729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pradhy729/subscriptions",
"organizations_url": "https://api.github.com/users/Pradhy729/orgs",
"repos_url": "https://api.github.com/users/Pradhy729/repos",
"events_url": "https://api.github.com/users/Pradhy729/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pradhy729/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5834?src=pr&el=h1) Report\n> Merging [#5834](https://codecov.io/gh/huggingface/transformers/pull/5834?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d088d744adb4e5aa45262a34acab3ae9e81de169&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `40.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5834?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5834 +/- ##\n==========================================\n+ Coverage 78.10% 78.11% +0.01% \n==========================================\n Files 146 146 \n Lines 26047 26053 +6 \n==========================================\n+ Hits 20344 20352 +8 \n+ Misses 5703 5701 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5834?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.46% <40.00%> (+0.61%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `80.61% <0.00%> (+3.06%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5834?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5834?src=pr&el=footer). Last update [d088d74...edd876a](https://codecov.io/gh/huggingface/transformers/pull/5834?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I will add a test for it.\r\n",
"@sshleifer --> I have made the changes you recommended and added a test that should fail without this change. Please review and let me know if there is anything else needed.",
"OK Will make the changes. Also do you know where to place the dataset class to prevent failure in the tf tests?",
"I've made the changes. Please review and feel free to edit as needed and merge",
"Should be good to go. I removed the WIP. Thanks.",
"LGTM but will wait for review from @sgugger too"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | To fix --> #5829 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5834/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5834",
"html_url": "https://github.com/huggingface/transformers/pull/5834",
"diff_url": "https://github.com/huggingface/transformers/pull/5834.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5834.patch",
"merged_at": 1595250458000
} |
https://api.github.com/repos/huggingface/transformers/issues/5833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5833/comments | https://api.github.com/repos/huggingface/transformers/issues/5833/events | https://github.com/huggingface/transformers/issues/5833 | 658,850,717 | MDU6SXNzdWU2NTg4NTA3MTc= | 5,833 | OpenAI GPT NoPaddingTokenFastTokenizerMatchingTest test fails | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"looks like it has been fixed."
] | 1,594 | 1,600 | 1,600 | CONTRIBUTOR | null | # 🐛 Bug
## Information
`tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_all_tokenizers`
fails
## To reproduce
```
pytest -n 1 tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_all_tokenizers
================================================================ test session starts =================================================================
platform linux -- Python 3.7.5, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: /mnt/nvme1/code/huggingface/transformers-master
plugins: hypothesis-5.5.4, filter-subpackage-0.1.1, arraydiff-0.3, flaky-3.6.1, ipynb-1.1.1.dev0, cov-2.10.0, astropy-header-0.1.2, forked-1.2.0, doctestplus-0.5.0, openfiles-0.4.0, remotedata-0.3.2, xdist-1.32.0
gw0 [1]
F [100%]
====================================================================== FAILURES ======================================================================
____________________________________________ NoPaddingTokenFastTokenizerMatchingTest.test_all_tokenizers _____________________________________________
[gw0] linux -- Python 3.7.5 /home/stas/anaconda3/envs/main/bin/python
self = <tests.test_tokenization_fast.NoPaddingTokenFastTokenizerMatchingTest testMethod=test_all_tokenizers>
def test_all_tokenizers(self):
for tok_case in self.TOKENIZERS_CLASSES:
for pretrained_name in tok_case.python_cls.pretrained_vocab_files_map[tok_case.vocab_key].keys():
# Tokenizer.filter makes it possible to filter which Tokenizer to case based on all the
# information available in Tokenizer (name, rust class, python class, vocab key name)
if tok_case.filter is None or (
tok_case.filter is not None and tok_case.filter(tok_case, pretrained_name)
):
kwargs = dict(t for t in tok_case.kwargs) if tok_case.kwargs else {}
with self.subTest("{} ({})".format(tok_case.name, pretrained_name)):
tokenizer_r = tok_case.rust_cls.from_pretrained(pretrained_name, **kwargs)
tokenizer_p = tok_case.python_cls.from_pretrained(pretrained_name, **kwargs)
> self.fast_align_python(tokenizer_r, tokenizer_p, tok_case, pretrained_name)
tests/test_tokenization_fast.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_tokenization_fast.py:836: in fast_align_python
self.assert_tokenization_python_rust_equals(tokenizer_r, tokenizer_p)
tests/test_tokenization_fast.py:192: in assert_tokenization_python_rust_equals
self.assertSequenceEqual(input_p[key], input_r[key])
E AssertionError: Sequences differ: [616,[111 chars] 0, 40477, 4830, 994, 580, 566, 260, 5958, 260[5290 chars] 239] != [616,[111 chars] 0, 4830, 994, 580, 566, 260, 5958, 260, 1490,[5160 chars] 239]
E
E First differing element 29:
E 40477
E 4830
E
E First sequence contains 12 additional elements.
E First extra element 973:
E 1832
E
E Diff is 8701 characters long. Set self.maxDiff to None to see it.
----------------------------------------------------------------- Captured log call ------------------------------------------------------------------
WARNING transformers.tokenization_utils_base:tokenization_utils_base.py:2086 Token indices sequence length is longer than the specified maximum sequence length for this model (985 > 512). Running this sequence through the model will result in indexing errors
================================================================== warnings summary ==================================================================
/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15
/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/directives.py:55
/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working
assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.'
/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/directives.py:62
/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/directives.py:62: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working
assert isinstance(args, collections.Mapping), '{} args must be a dict with argument names as keys.'.format(name)
/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/typemap.py:1
/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/typemap.py:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working
from collections import OrderedDict, Sequence, defaultdict
-- Docs: https://docs.pytest.org/en/latest/warnings.html
============================================================== short test summary info ===============================================================
FAILED tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_all_tokenizers - AssertionError: Sequences differ: [616,[111 ...
=========================================================== 1 failed, 4 warnings in 5.44s ============================================================
```
## Expected behavior
not fail ;)
I started digging, and it fails only for the `OpenAI GPT` tokenizer; the `GPT2` tests pass.
If someone could give me a pointer, I'd be happy to investigate more.
I'm not sure whether it's the python or rust implementation that is at fault.
I tested that even if I force max_length=512 to remove the warning, the test still fails in the same way.
## Environment info
```
- `transformers` version: 3.0.2
- tokenizers: 0.8.1rc1
- Platform: Linux-4.15.0-109-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.0.1 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5833/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5832 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5832/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5832/comments | https://api.github.com/repos/huggingface/transformers/issues/5832/events | https://github.com/huggingface/transformers/pull/5832 | 658,825,133 | MDExOlB1bGxSZXF1ZXN0NDUwNzQzNDc4 | 5,832 | [wip] T5 tokenizer should add special tokens | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for taking care of it @sshleifer !",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5832?src=pr&el=h1) Report\n> Merging [#5832](https://codecov.io/gh/huggingface/transformers/pull/5832?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/615be03f9d961c0c9722fe10e7830e011066772e&el=desc) will **decrease** coverage by `0.18%`.\n> The diff coverage is `84.61%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5832?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5832 +/- ##\n==========================================\n- Coverage 78.66% 78.48% -0.19% \n==========================================\n Files 146 146 \n Lines 26200 26213 +13 \n==========================================\n- Hits 20611 20574 -37 \n- Misses 5589 5639 +50 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5832?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5832/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `94.04% <84.61%> (-1.73%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5832/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5832/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5832/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5832?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5832?src=pr&el=footer). Last update [615be03...04f18d4](https://codecov.io/gh/huggingface/transformers/pull/5832?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5832/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5832",
"html_url": "https://github.com/huggingface/transformers/pull/5832",
"diff_url": "https://github.com/huggingface/transformers/pull/5832.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5832.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5831/comments | https://api.github.com/repos/huggingface/transformers/issues/5831/events | https://github.com/huggingface/transformers/pull/5831 | 658,778,007 | MDExOlB1bGxSZXF1ZXN0NDUwNzAwOTQx | 5,831 | minor doc fixes | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5831?src=pr&el=h1) Report\n> Merging [#5831](https://codecov.io/gh/huggingface/transformers/pull/5831?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d088d744adb4e5aa45262a34acab3ae9e81de169&el=desc) will **decrease** coverage by `1.32%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5831?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5831 +/- ##\n==========================================\n- Coverage 78.10% 76.78% -1.33% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20344 20000 -344 \n- Misses 5703 6047 +344 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5831?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <ø> (ø)` | |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `19.81% <0.00%> (-79.28%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | 
`60.00% <0.00%> (-25.72%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `16.51% <0.00%> (-21.34%)` | :arrow_down: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `32.00% <0.00%> (-17.10%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.32% <0.00%> (-11.23%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `95.89% <0.00%> (-2.74%)` | :arrow_down: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5831/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5831?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5831?src=pr&el=footer). Last update [d088d74...38d6390](https://codecov.io/gh/huggingface/transformers/pull/5831?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for the PR. Don't hesitate to tag me on PRs related to doc.\r\nAlso note, that I haven't had time to clean up all docstrings yet (here the sphinx syntax is not respected so there is no link to the mentioned classes). I've started with Config and will make my way down to the docs (following the index)."
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | correct superclass name and small grammar fixes
then later added a correction in the error message:
It appears to be `BaseTokenizer` from looking at:
`from tokenizers.implementations import BaseTokenizer as BaseTokenizerFast`
and not `Tokenizer` as it currently says.
p.s. Please let me know whether you prefer specific separate PRs or bundled PRs for related things. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5831/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5831",
"html_url": "https://github.com/huggingface/transformers/pull/5831",
"diff_url": "https://github.com/huggingface/transformers/pull/5831.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5831.patch",
"merged_at": 1595438554000
} |
https://api.github.com/repos/huggingface/transformers/issues/5830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5830/comments | https://api.github.com/repos/huggingface/transformers/issues/5830/events | https://github.com/huggingface/transformers/pull/5830 | 658,731,472 | MDExOlB1bGxSZXF1ZXN0NDUwNjU5MzUy | 5,830 | [cleanups] make Marian save as Marian | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5830?src=pr&el=h1) Report\n> Merging [#5830](https://codecov.io/gh/huggingface/transformers/pull/5830?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d088d744adb4e5aa45262a34acab3ae9e81de169&el=desc) will **decrease** coverage by `0.06%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5830?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5830 +/- ##\n==========================================\n- Coverage 78.10% 78.04% -0.07% \n==========================================\n Files 146 146 \n Lines 26047 26049 +2 \n==========================================\n- Hits 20344 20329 -15 \n- Misses 5703 5720 +17 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5830?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5830/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5830/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.90% <100.00%> (+2.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5830/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `45.98% <0.00%> (-44.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5830/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5830/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.44% <0.00%> (-6.52%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5830/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.92% <0.00%> (-1.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5830/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5830/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5830/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5830?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5830?src=pr&el=footer). Last update [d088d74...41f5de7](https://codecov.io/gh/huggingface/transformers/pull/5830?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | allow `MarianTokenizer.prepare_translation_batch` to ignore kwargs like `src_lang` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5830/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5830",
"html_url": "https://github.com/huggingface/transformers/pull/5830",
"diff_url": "https://github.com/huggingface/transformers/pull/5830.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5830.patch",
"merged_at": 1594968866000
} |
https://api.github.com/repos/huggingface/transformers/issues/5829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5829/comments | https://api.github.com/repos/huggingface/transformers/issues/5829/events | https://github.com/huggingface/transformers/issues/5829 | 658,646,953 | MDU6SXNzdWU2NTg2NDY5NTM= | 5,829 | ValueError: DataLoader with IterableDataset: expected unspecified sampler option, | {
"login": "Pradhy729",
"id": 49659913,
"node_id": "MDQ6VXNlcjQ5NjU5OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/49659913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pradhy729",
"html_url": "https://github.com/Pradhy729",
"followers_url": "https://api.github.com/users/Pradhy729/followers",
"following_url": "https://api.github.com/users/Pradhy729/following{/other_user}",
"gists_url": "https://api.github.com/users/Pradhy729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pradhy729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pradhy729/subscriptions",
"organizations_url": "https://api.github.com/users/Pradhy729/orgs",
"repos_url": "https://api.github.com/users/Pradhy729/repos",
"events_url": "https://api.github.com/users/Pradhy729/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pradhy729/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the fix with #5834 :)",
"The same issue exists for _get_eval_sampler... I patched the Trainer in the mean time, but it would be good if it were fixed.",
"Hi @avercau can you open an issue with your problem? Thank you"
] | 1,594 | 1,624 | 1,596 | CONTRIBUTOR | null | # 🐛 Bug
## Information
This is in regards to the `Trainer` object. The `get_train_dataloader` function uses RandomSampler or DistributedSampler, both of which I believe won't work with [iterable datasets](https://pytorch.org/docs/stable/data.html#iterable-style-datasets), which are only compatible with InfiniteConstantSampler; see the [pytorch documentation](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataloader.html#DataLoader)
https://github.com/huggingface/transformers/blob/d088d744adb4e5aa45262a34acab3ae9e81de169/src/transformers/trainer.py#L229-L242
## Steps to reproduce the behavior:
1. Create an iterable dataset and pass it to the Trainer object.
Gives the following error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<timed eval> in <module>
~/.conda/envs/my_root/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path)
382 training will resume from the optimizer/scheduler states loaded here.
383 """
--> 384 train_dataloader = self.get_train_dataloader()
385 if self.args.max_steps > 0:
386 t_total = self.args.max_steps
~/.conda/envs/my_root/lib/python3.6/site-packages/transformers/trainer.py in get_train_dataloader(self)
247 sampler=train_sampler,
248 collate_fn=self.data_collator,
--> 249 drop_last=self.args.dataloader_drop_last,
250 )
251
~/.conda/envs/my_root/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __init__(self, dataset, batch_size, shuffle, sampler, batch_sampler, num_workers, collate_fn, pin_memory, drop_last, timeout, worker_init_fn, multiprocessing_context)
177 raise ValueError(
178 "DataLoader with IterableDataset: expected unspecified "
--> 179 "sampler option, but got sampler={}".format(sampler))
180 elif batch_sampler is not None:
181 # See NOTE [ Custom Samplers and IterableDataset ]
ValueError: DataLoader with IterableDataset: expected unspecified sampler option, but got sampler=<torch.utils.data.sampler.RandomSampler object at 0x7fff5803b2b0>
## Expected behavior
Ideally, we should check if the dataset is an iterable dataset and if so, pass no sampler to the dataloader. Or allow the option of passing in our own sampler to the trainer.
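A minimal sketch of that check, using hypothetical stand-in classes so it runs without PyTorch installed (the real fix would test `isinstance(train_dataset, torch.utils.data.IterableDataset)` inside `get_train_dataloader`; `choose_train_sampler` is an illustrative name, not the actual Trainer API):

```python
# Stand-ins for torch.utils.data classes -- hypothetical, only so this
# sketch is self-contained; real code would import them from PyTorch.
class IterableDataset:
    pass

class RandomSampler:
    def __init__(self, data_source):
        self.data_source = data_source

def choose_train_sampler(dataset, local_rank=-1):
    # For iterable-style datasets, return None so the DataLoader can fall
    # back to its default infinite sampler instead of raising ValueError.
    if isinstance(dataset, IterableDataset):
        return None
    # Map-style datasets keep the current behavior (a DistributedSampler
    # would be chosen here when local_rank != -1).
    return RandomSampler(dataset)

class Stream(IterableDataset):
    def __iter__(self):
        return iter(range(4))

print(choose_train_sampler(Stream()))                   # → None
print(type(choose_train_sampler([1, 2, 3])).__name__)   # → RandomSampler
```

With this dispatch, the `DataLoader` would be built with `sampler=None` for iterable datasets, which is exactly what the `ValueError` above asks for.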
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.4.121-92.104-default-x86_64-with-SuSE-12-x86_64
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
I can contribute and create a PR to fix this. Let me know. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5829/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5828/comments | https://api.github.com/repos/huggingface/transformers/issues/5828/events | https://github.com/huggingface/transformers/pull/5828 | 658,614,042 | MDExOlB1bGxSZXF1ZXN0NDUwNTU1MDkz | 5,828 | language tag addition on albert-mongolian | {
"login": "bayartsogt-ya",
"id": 43239645,
"node_id": "MDQ6VXNlcjQzMjM5NjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bayartsogt-ya",
"html_url": "https://github.com/bayartsogt-ya",
"followers_url": "https://api.github.com/users/bayartsogt-ya/followers",
"following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}",
"gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions",
"organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs",
"repos_url": "https://api.github.com/users/bayartsogt-ya/repos",
"events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}",
"received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5828?src=pr&el=h1) Report\n> Merging [#5828](https://codecov.io/gh/huggingface/transformers/pull/5828?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d088d744adb4e5aa45262a34acab3ae9e81de169&el=desc) will **decrease** coverage by `0.83%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5828?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5828 +/- ##\n==========================================\n- Coverage 78.10% 77.26% -0.84% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20344 20126 -218 \n- Misses 5703 5921 +218 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5828?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5828/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5828/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.01%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5828/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5828/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5828/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` 
| :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5828?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5828?src=pr&el=footer). Last update [d088d74...fa415dc](https://codecov.io/gh/huggingface/transformers/pull/5828?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | hello!
I just added the language tag to the model card. Could you check it for me? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5828/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5828/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5828",
"html_url": "https://github.com/huggingface/transformers/pull/5828",
"diff_url": "https://github.com/huggingface/transformers/pull/5828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5828.patch",
"merged_at": 1594964438000
} |
https://api.github.com/repos/huggingface/transformers/issues/5827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5827/comments | https://api.github.com/repos/huggingface/transformers/issues/5827/events | https://github.com/huggingface/transformers/issues/5827 | 658,538,793 | MDU6SXNzdWU2NTg1Mzg3OTM= | 5,827 | Reproducibility when using pretrained GPT2 | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 2209491906,
"node_id": "MDU6TGFiZWwyMjA5NDkxOTA2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/gpt2",
"name": "gpt2",
"color": "45cca5",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Definitely seems bad. Have you experienced this with models besides GPT2? \r\n\r\n\r\nI think there might be two issues here:\r\n1) `trainer.utils.set_seed` doesn't work completely: @julien-c \r\n\r\n2) I'm not sure whether this warning is expected behavior:\r\n```\r\nSome weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```",
"However, hashes won't be same even for a cloned state dict. Try to run this script\r\n```\r\nfrom copy import deepcopy\r\nfrom transformers import GPT2LMHeadModel\r\nimport torch\r\nfrom hashlib import blake2b\r\n\r\n\r\ndef get_hash(file):\r\n return blake2b(open(file, 'rb').read()).hexdigest()\r\n\r\n\r\nmodel1 = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\nsd1 = model1.state_dict()\r\nsd2 = deepcopy(sd1)\r\nwith open(\"sd1\", \"wb\") as f:\r\n torch.save(sd1, f)\r\nwith open(\"sd2\", \"wb\") as f:\r\n torch.save(sd2, f)\r\nassert get_hash(\"sd1\") == get_hash(\"sd2\")\r\n```\r\nI also tried to save just individual weights from state dicts, but in this case hashes were equal. So I'm really confused, maybe \r\nit's an issue with Pytorch saving mechanism.",
"@sshleifer - I don't really that this issue is related to the Trainer.\r\n\r\nTo answer this issue, I don't think there is a bug regarding reproducibility. We have multiple tests that verify:\r\n1) That saving / loading a pretrained GPT2 model produces the exact same results => the model has the exact same weights and logic => the model results are reproducible, see:\r\nhttps://github.com/huggingface/transformers/blob/9d37c56bab8f7f1f1aa0b65be039516072254e77/tests/test_modeling_common.py#L75\r\n\r\n2) That loading a pretrained GPT2 model from the modelhub always produces the same results => ... => the model are reproducible, see: https://github.com/huggingface/transformers/blob/9d37c56bab8f7f1f1aa0b65be039516072254e77/tests/test_modeling_gpt2.py#L352\r\n\r\nThe second raised issue here are the warnings when loading the pretrained weights. There have been **a lot** of issues about this, e.g.: https://github.com/huggingface/transformers/issues/5800, https://github.com/huggingface/transformers/issues/5348, https://github.com/huggingface/transformers/issues/3553, https://github.com/huggingface/transformers/issues/5814 => These warnings happen because of two reasons:\r\n1) It concerns the embedding matrix, where as the output embeddings are tied to the input embeddings so that the weights don't matter and are therefore not saved\r\n2) It concerns these buffer weights: https://github.com/huggingface/transformers/blob/9d37c56bab8f7f1f1aa0b65be039516072254e77/src/transformers/modeling_gpt2.py#L128 , which are hardcoded values and don't need to be saved either.\r\n\r\nSince we are getting a lot of duplicated issues about this it might be worth to spend some time to disable all of them (also pinging @LysandreJik, @sshleifer, and @sgugger here in case you feel like taking a deeper look into the warnings :-)) ",
"The issue may be more related to transparency than reproducibility.\r\n\r\nIt would be good to be able to prove which model was used at the start, and to show that 2 models are the same (even more if it's done for audit purposes in a late future).",
"Comparing hashes for saved model files does not work, as for models with **same** state dict hashes would differ (at least for GPT2). So maybe it's better to load state dicts into memory and compare tensors for each key.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,594 | 1,601 | 1,601 | CONTRIBUTOR | null | # 🚀 Feature request
Ensure reproducibility when loading pretrained models.
## Motivation
In order to distribute results in a transparent manner, it is important to ensure reproducibility.
When loading a pretrained model, I cannot get the same weights twice, probably because some weights are randomly initialized at load time.
Even when using `set_seed`, I cannot get the same model twice.
## Your contribution
I created a [small example](https://colab.research.google.com/gist/borisdayma/efa9ce5e8c7078bf12031b525f21f107/transformers-repeatability.ipynb) to illustrate the issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5827/timeline | completed | null | null |
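The last substantive comment in the thread above suggests that hashing serialized checkpoint files is unreliable for checking model equality, and that loading the state dicts and comparing tensors key by key works instead. A minimal, framework-agnostic sketch of that idea (plain Python lists stand in for tensors, and `state_dict_equal` is a hypothetical helper name, not a Transformers or PyTorch API):

```python
def state_dict_equal(sd1, sd2):
    """Compare two 'state dicts' key by key and element-wise,
    instead of hashing their serialized files (whose bytes can
    differ even when the contents are identical)."""
    if sd1.keys() != sd2.keys():
        return False
    return all(sd1[k] == sd2[k] for k in sd1)

# Two dicts with identical contents compare equal...
a = {"embed.weight": [0.1, 0.2], "lm_head.bias": [0.0]}
b = {"embed.weight": [0.1, 0.2], "lm_head.bias": [0.0]}
print(state_dict_equal(a, b))  # True

# ...while any differing value (or key set) is caught.
c = {"embed.weight": [0.1, 0.3], "lm_head.bias": [0.0]}
print(state_dict_equal(a, c))  # False
```

With real models, the element-wise comparison would use something like `torch.equal` per key rather than `==` on lists, but the key-by-key structure is the same.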
https://api.github.com/repos/huggingface/transformers/issues/5826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5826/comments | https://api.github.com/repos/huggingface/transformers/issues/5826/events | https://github.com/huggingface/transformers/issues/5826 | 658,535,946 | MDU6SXNzdWU2NTg1MzU5NDY= | 5,826 | Vocab size mismatch on EncoderDecoder model from_pretrained | {
"login": "afcruzs",
"id": 4340932,
"node_id": "MDQ6VXNlcjQzNDA5MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4340932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afcruzs",
"html_url": "https://github.com/afcruzs",
"followers_url": "https://api.github.com/users/afcruzs/followers",
"following_url": "https://api.github.com/users/afcruzs/following{/other_user}",
"gists_url": "https://api.github.com/users/afcruzs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/afcruzs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/afcruzs/subscriptions",
"organizations_url": "https://api.github.com/users/afcruzs/orgs",
"repos_url": "https://api.github.com/users/afcruzs/repos",
"events_url": "https://api.github.com/users/afcruzs/events{/privacy}",
"received_events_url": "https://api.github.com/users/afcruzs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @afcruzs,\r\n\r\nThese lines are the problem I think:\r\n\r\n```python \r\n# Loading saved model and its configuration\r\nencoder_config = BertConfig.from_pretrained('ok')\r\ndecoder_config = BertConfig.from_pretrained('ok')\r\nprint(encoder_config.vocab_size)\r\nprint(encoder_config.vocab_size)\r\nencoder_decoder_config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)\r\nmodel2 = EncoderDecoderModel.from_pretrained('ok', config=encoder_decoder_config) # This throws\r\n```\r\n\r\nIf you replace these lines with\r\n\r\n```python\r\n# Loading saved model and its configuration\r\nencoder_decoder_config = EncoderDecoderConfig.from_pretrained(\"ok\")\r\nmodel2 = EncoderDecoderModel.from_pretrained('ok', config=encoder_decoder_config)\r\n``` \r\nno error should be thrown.\r\n\r\nThis line here:\r\n```python\r\nencoder_config = BertConfig.from_pretrained('ok')\r\n```\r\nsaves a EncoderDecoderConfig as a Bert Encoder config which should not be done IMO.",
"Thanks @patrickvonplaten! that is indeed much clearer. My actual use case is to load the hf pretrained module with possibly modifying the config, saving with save_pretrained, and then later loading with from_pretrained. So this is my final code:\r\n\r\n```\r\nload_dir = 'bert-base-multilingual-cased'\r\nencoder_config = BertConfig.from_pretrained(load_dir)\r\ndecoder_config = BertConfig.from_pretrained(load_dir, is_decoder=True)\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained(load_dir, load_dir, encoder_config=encoder_config, decoder_config=decoder_config)\r\n\r\n# Train for some time...\r\n\r\n# Save model!\r\nmodel.save_pretrained('ok')\r\n\r\n# Loading saved model and its configuration\r\nencoder_decoder_config = EncoderDecoderConfig.from_pretrained(\"ok\")\r\nmodel2 = EncoderDecoderModel.from_pretrained('ok', config=encoder_decoder_config)\r\n```\r\n\r\nI think it would be a good idea to add similar examples in the docs for clarity. Specially for `EncoderDecoderConfig.from_pretrained(\"ok\")` and `.from_pretrained(load_dir, is_decoder=True)` since as you pointed out, doing so carelessly can lead to load the decoder config as encoder. I'm happy to help with the examples if you agree with them!",
"Hey @afcruzs,\r\n\r\nI agree very much that the `EncoderDecoderModel` should have better documentation. \r\n\r\nMy plan was to release a notebook soon that explains in detail how to use the `EncoderDecoderModel` and then also to update the docs. \r\n\r\nI won't be able to start with this until 3/08 so feel free to open A PR :-) "
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): EncoderDecoder with bert-base-multilingual-cased in both
Language I am using the model on (English, Chinese ...): not relevant for the bug
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on: not relevant for the bug
## To reproduce
Steps to reproduce the behavior:
I am trying to load a training checkpoint using the `save_pretrained` and `from_pretrained` API with the EncoderDecoder model. `EncoderDecoderModel.from_pretrained` fails to load the model when the configuration is loaded from the previously checkpointed model. I believe it's because it is loading a default vocab size (30522) instead of whatever is defined in the saved config (119547 in my case). To reproduce, run:
```python
from transformers import EncoderDecoderModel, BertTokenizer, BertConfig, EncoderDecoderConfig
# Loading encoder-decoder model and saving it
load_dir = 'bert-base-multilingual-cased'
encoder_config = BertConfig.from_pretrained(load_dir)
decoder_config = BertConfig.from_pretrained(load_dir, is_decoder=True)
print(encoder_config.vocab_size)
print(decoder_config.vocab_size)
tokenizer = BertTokenizer.from_pretrained(load_dir)
model = EncoderDecoderModel.from_encoder_decoder_pretrained(load_dir, load_dir, encoder_config=encoder_config, decoder_config=decoder_config) # initialize Bert2Bert
model.save_pretrained('ok')
# Loading saved model and its configuration
encoder_config = BertConfig.from_pretrained('ok')
decoder_config = BertConfig.from_pretrained('ok')
print(encoder_config.vocab_size)
print(decoder_config.vocab_size)
encoder_decoder_config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
model2 = EncoderDecoderModel.from_pretrained('ok', config=encoder_decoder_config) # This throws
```
The exception is the following:
```
File "/home/ancruzsa/.local/lib/python3.6/site-packages/transformers/modeling_utils.py", line 781, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)
RuntimeError: Error(s) in loading state_dict for EncoderDecoderModel:
size mismatch for encoder.embeddings.word_embeddings.weight: copying a param with shape torch.Size([119547, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]).
size mismatch for decoder.bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([119547, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]).
size mismatch for decoder.cls.predictions.bias: copying a param with shape torch.Size([119547]) from checkpoint, the shape in current model is torch.Size([30522]).
size mismatch for decoder.cls.predictions.decoder.weight: copying a param with shape torch.Size([119547, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]).
size mismatch for decoder.cls.predictions.decoder.bias: copying a param with shape torch.Size([119547]) from checkpoint, the shape in current model is torch.Size([30522]).
```
## Expected behavior
`from_pretrained(path)` should load the model without issues and using the provided configuration.
Edit: I was expecting `from_pretrained` with a single path as argument to work as explained in [#4595 comment](https://github.com/huggingface/transformers/issues/4595#issuecomment-638077144). However, it seems like doing `EncoderDecoderModel.from_encoder_decoder_pretrained('ok', 'ok', encoder_config=encoder_config, decoder_config=decoder_config)` does not throw an exception but it gives different results in text generation compared to `EncoderDecoderModel.from_pretrained(path)`. It would be great to confirm if both are supported and load the model weights correctly.
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0 / Yes with GPU
- Tensorflow version (GPU?): None
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5826/timeline | completed | null | null |
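The root cause discussed in the thread above is loading a nested encoder-decoder config through a flat, single-model config class, so nested fields such as `vocab_size` silently fall back to class defaults. The mechanism can be illustrated with a small JSON round-trip sketch in pure Python (the file handling and loader functions are illustrative, not the actual Transformers implementation; the two vocab sizes mirror the numbers in the report):

```python
import json
import os
import tempfile

DEFAULT_VOCAB = 30522  # stand-in for a single-model config class default

# A nested "encoder-decoder" config, shaped like what save_pretrained writes.
saved = {
    "model_type": "encoder-decoder",
    "encoder": {"vocab_size": 119547},
    "decoder": {"vocab_size": 119547},
}

path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump(saved, f)

with open(path) as f:
    raw = json.load(f)

# Flat loader: only looks at the top level, finds no vocab_size,
# and silently falls back to the default -> size mismatch on load.
flat_vocab = raw.get("vocab_size", DEFAULT_VOCAB)

# Nested loader: reads the encoder sub-config -> correct value.
nested_vocab = raw["encoder"]["vocab_size"]

print(flat_vocab, nested_vocab)  # 30522 119547
```

This is why loading the saved checkpoint's config with `EncoderDecoderConfig.from_pretrained` (which knows about the nested layout) works, while loading it with `BertConfig.from_pretrained` does not.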
https://api.github.com/repos/huggingface/transformers/issues/5825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5825/comments | https://api.github.com/repos/huggingface/transformers/issues/5825/events | https://github.com/huggingface/transformers/pull/5825 | 658,518,935 | MDExOlB1bGxSZXF1ZXN0NDUwNDcxMjU0 | 5,825 | Add inference widget examples | {
"login": "clmnt",
"id": 821155,
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clmnt",
"html_url": "https://github.com/clmnt",
"followers_url": "https://api.github.com/users/clmnt/followers",
"following_url": "https://api.github.com/users/clmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clmnt/subscriptions",
"organizations_url": "https://api.github.com/users/clmnt/orgs",
"repos_url": "https://api.github.com/users/clmnt/repos",
"events_url": "https://api.github.com/users/clmnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/clmnt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5825?src=pr&el=h1) Report\n> Merging [#5825](https://codecov.io/gh/huggingface/transformers/pull/5825?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d088d744adb4e5aa45262a34acab3ae9e81de169&el=desc) will **increase** coverage by `0.39%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5825?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5825 +/- ##\n==========================================\n+ Coverage 78.10% 78.50% +0.39% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n+ Hits 20344 20448 +104 \n+ Misses 5703 5599 -104 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5825?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5825?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5825?src=pr&el=footer). Last update [d088d74...682478c](https://codecov.io/gh/huggingface/transformers/pull/5825?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,595 | 1,595 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5825/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5825",
"html_url": "https://github.com/huggingface/transformers/pull/5825",
"diff_url": "https://github.com/huggingface/transformers/pull/5825.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5825.patch",
"merged_at": 1595942041000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5824/comments | https://api.github.com/repos/huggingface/transformers/issues/5824/events | https://github.com/huggingface/transformers/pull/5824 | 658,474,398 | MDExOlB1bGxSZXF1ZXN0NDUwNDMyMDEy | 5,824 | New Community NB Add | {
"login": "lordtt13",
"id": 35500534,
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordtt13",
"html_url": "https://github.com/lordtt13",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5824?src=pr&el=h1) Report\n> Merging [#5824](https://codecov.io/gh/huggingface/transformers/pull/5824?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/283500ff9f9041fc027a2ce54b4e1f6337c5abbb&el=desc) will **decrease** coverage by `0.43%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5824?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5824 +/- ##\n==========================================\n- Coverage 78.50% 78.07% -0.44% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20448 20335 -113 \n- Misses 5599 5712 +113 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5824?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5824/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5824/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5824/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5824/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5824?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5824?src=pr&el=footer). Last update [283500f...811bc81](https://codecov.io/gh/huggingface/transformers/pull/5824?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | Signed-off-by: lordtt13 <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5824/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5824",
"html_url": "https://github.com/huggingface/transformers/pull/5824",
"diff_url": "https://github.com/huggingface/transformers/pull/5824.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5824.patch",
"merged_at": 1595924712000
} |
https://api.github.com/repos/huggingface/transformers/issues/5823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5823/comments | https://api.github.com/repos/huggingface/transformers/issues/5823/events | https://github.com/huggingface/transformers/pull/5823 | 658,473,799 | MDExOlB1bGxSZXF1ZXN0NDUwNDMxNDk1 | 5,823 | Add model card for dv-wave | {
"login": "mapmeld",
"id": 643918,
"node_id": "MDQ6VXNlcjY0MzkxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mapmeld",
"html_url": "https://github.com/mapmeld",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions",
"organizations_url": "https://api.github.com/users/mapmeld/orgs",
"repos_url": "https://api.github.com/users/mapmeld/repos",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"received_events_url": "https://api.github.com/users/mapmeld/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5823?src=pr&el=h1) Report\n> Merging [#5823](https://codecov.io/gh/huggingface/transformers/pull/5823?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c45d7a707d56096b7ed48deaa3ba186fd7c306d4&el=desc) will **increase** coverage by `0.34%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5823?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5823 +/- ##\n==========================================\n+ Coverage 78.16% 78.50% +0.34% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n+ Hits 20359 20448 +89 \n+ Misses 5688 5599 -89 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5823?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5823/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5823/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5823/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5823/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+2.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5823/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% 
<0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5823?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5823?src=pr&el=footer). Last update [c45d7a7...46073af](https://codecov.io/gh/huggingface/transformers/pull/5823?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great! Link to Dhivehi for those who'd like to learn more about this language: https://en.wikipedia.org/wiki/Maldivian_language"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | Dhivehi / Maldives language model - includes links to notebooks for training and comparing performance on a given news classification task | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5823/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5823",
"html_url": "https://github.com/huggingface/transformers/pull/5823",
"diff_url": "https://github.com/huggingface/transformers/pull/5823.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5823.patch",
"merged_at": 1594926831000
} |
https://api.github.com/repos/huggingface/transformers/issues/5822 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5822/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5822/comments | https://api.github.com/repos/huggingface/transformers/issues/5822/events | https://github.com/huggingface/transformers/pull/5822 | 658,440,267 | MDExOlB1bGxSZXF1ZXN0NDUwNDAyMjEz | 5,822 | [seq2seq] test that finetune takes < 7 seconds | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,594 | 1,595 | 1,595 | CONTRIBUTOR | null | Could also check memory and loss.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5822/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5822",
"html_url": "https://github.com/huggingface/transformers/pull/5822",
"diff_url": "https://github.com/huggingface/transformers/pull/5822.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5822.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5821 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5821/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5821/comments | https://api.github.com/repos/huggingface/transformers/issues/5821/events | https://github.com/huggingface/transformers/pull/5821 | 658,429,038 | MDExOlB1bGxSZXF1ZXN0NDUwMzkyNjA4 | 5,821 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5821?src=pr&el=h1) Report\n> Merging [#5821](https://codecov.io/gh/huggingface/transformers/pull/5821?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c45d7a707d56096b7ed48deaa3ba186fd7c306d4&el=desc) will **decrease** coverage by `0.85%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5821?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5821 +/- ##\n==========================================\n- Coverage 78.16% 77.30% -0.86% \n==========================================\n Files 146 146 \n Lines 26047 26047 \n==========================================\n- Hits 20359 20135 -224 \n- Misses 5688 5912 +224 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5821?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | 
`86.46% <0.00%> (+2.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5821/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5821?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5821?src=pr&el=footer). Last update [c45d7a7...397ba6e](https://codecov.io/gh/huggingface/transformers/pull/5821?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,594 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5821/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5821",
"html_url": "https://github.com/huggingface/transformers/pull/5821",
"diff_url": "https://github.com/huggingface/transformers/pull/5821.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5821.patch",
"merged_at": 1594927112000
} |