| Column | Type | Range / Values |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| number | int64 | 1–29.2k |
| node_id | stringlengths | 18–32 |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/6921
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6921/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6921/comments
https://api.github.com/repos/huggingface/transformers/issues/6921/events
https://github.com/huggingface/transformers/pull/6921
691,950,542
MDExOlB1bGxSZXF1ZXN0NDc4NjE1MjM0
6,921
[model_cards] Fixed some typing mistakes in usage sections in model cards.
{ "login": "abdullaholuk-loodos", "id": 70137509, "node_id": "MDQ6VXNlcjcwMTM3NTA5", "avatar_url": "https://avatars.githubusercontent.com/u/70137509?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abdullaholuk-loodos", "html_url": "https://github.com/abdullaholuk-loodos", "followers_url": "https://api.github.com/users/abdullaholuk-loodos/followers", "following_url": "https://api.github.com/users/abdullaholuk-loodos/following{/other_user}", "gists_url": "https://api.github.com/users/abdullaholuk-loodos/gists{/gist_id}", "starred_url": "https://api.github.com/users/abdullaholuk-loodos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abdullaholuk-loodos/subscriptions", "organizations_url": "https://api.github.com/users/abdullaholuk-loodos/orgs", "repos_url": "https://api.github.com/users/abdullaholuk-loodos/repos", "events_url": "https://api.github.com/users/abdullaholuk-loodos/events{/privacy}", "received_events_url": "https://api.github.com/users/abdullaholuk-loodos/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,599
1,599
1,599
CONTRIBUTOR
null
Loodos model cards had errors in the "Usage" section; these are now fixed. Also, the "electra-base-turkish-uncased" model was removed from S3 and re-uploaded as "electra-base-turkish-uncased-discriminator", and its README was added.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6921/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6921", "html_url": "https://github.com/huggingface/transformers/pull/6921", "diff_url": "https://github.com/huggingface/transformers/pull/6921.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6921.patch", "merged_at": 1599138824000 }
https://api.github.com/repos/huggingface/transformers/issues/6920
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6920/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6920/comments
https://api.github.com/repos/huggingface/transformers/issues/6920/events
https://github.com/huggingface/transformers/issues/6920
691,833,574
MDU6SXNzdWU2OTE4MzM1NzQ=
6,920
(ONNX) Error while converting the model: bad allocation
{ "login": "shinishiho", "id": 59284549, "node_id": "MDQ6VXNlcjU5Mjg0NTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/59284549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shinishiho", "html_url": "https://github.com/shinishiho", "followers_url": "https://api.github.com/users/shinishiho/followers", "following_url": "https://api.github.com/users/shinishiho/following{/other_user}", "gists_url": "https://api.github.com/users/shinishiho/gists{/gist_id}", "starred_url": "https://api.github.com/users/shinishiho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shinishiho/subscriptions", "organizations_url": "https://api.github.com/users/shinishiho/orgs", "repos_url": "https://api.github.com/users/shinishiho/repos", "events_url": "https://api.github.com/users/shinishiho/events{/privacy}", "received_events_url": "https://api.github.com/users/shinishiho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ok, I know it was my fault. I didn't add the argument `--use-external-format` (gpt2-xl is more than 2GB)\r\nActually I had to open the convert_graph_to_onnx.py file and read each argument's description\r\nThanks again, I'm closing the issue now." ]
1,599
1,599
1,599
NONE
null
I was trying to convert the gpt2-xl model to ONNX using convert_graph_to_onnx.py. It ran for a while and then stopped with some errors:

`TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!` at `w = w / (float(v.size(-1)) ** 0.5)` (in modeling_gpt2.py:151)

`TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!` at `mask = self.bias[:, :, ns - nd : ns, :ns]` (in modeling_gpt2.py:151)

And the last one: `Error while converting the model: bad allocation`

I googled this problem, but there was no effective solution (for me) at all. Please help me, thank you in advance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6920/timeline
completed
null
null
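Editor's note: the fix the reporter found is the `--use-external-format` flag of `convert_graph_to_onnx.py`, needed because gpt2-xl's weights exceed the 2 GB ONNX protobuf limit. As a hedged sketch, only `--use-external-format` is confirmed by the thread; the other argument names and the output path are assumptions about the script's argument parser and may vary across transformers versions:

```python
# Sketch: shell out to the conversion script with external-format storage.
# Only --use-external-format is confirmed by the issue thread; the remaining
# flags and the output path are illustrative assumptions.
import subprocess

subprocess.run(
    [
        "python", "convert_graph_to_onnx.py",  # path within your transformers checkout
        "--framework", "pt",                   # convert the PyTorch graph
        "--model", "gpt2-xl",
        "--use-external-format",               # store >2GB weights outside the protobuf
        "onnx/gpt2-xl.onnx",                   # assumed positional output path
    ],
    check=True,
)
```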
https://api.github.com/repos/huggingface/transformers/issues/6919
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6919/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6919/comments
https://api.github.com/repos/huggingface/transformers/issues/6919/events
https://github.com/huggingface/transformers/pull/6919
691,607,899
MDExOlB1bGxSZXF1ZXN0NDc4MzI4MTgy
6,919
tweak tar command in readme
{ "login": "brettkoonce", "id": 11281814, "node_id": "MDQ6VXNlcjExMjgxODE0", "avatar_url": "https://avatars.githubusercontent.com/u/11281814?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brettkoonce", "html_url": "https://github.com/brettkoonce", "followers_url": "https://api.github.com/users/brettkoonce/followers", "following_url": "https://api.github.com/users/brettkoonce/following{/other_user}", "gists_url": "https://api.github.com/users/brettkoonce/gists{/gist_id}", "starred_url": "https://api.github.com/users/brettkoonce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brettkoonce/subscriptions", "organizations_url": "https://api.github.com/users/brettkoonce/orgs", "repos_url": "https://api.github.com/users/brettkoonce/repos", "events_url": "https://api.github.com/users/brettkoonce/events{/privacy}", "received_events_url": "https://api.github.com/users/brettkoonce/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=h1) Report\n> Merging [#6919](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dfa10a41ba3fd9c5289bebd3baeff8792b1b2281?el=desc) will **decrease** coverage by `0.20%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6919/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6919 +/- ##\n==========================================\n- Coverage 80.02% 79.82% -0.21% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n- Hits 22876 22818 -58 \n- Misses 5710 5768 +58 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `36.50% <0.00%> (-60.32%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.40% <0.00%> (+0.34%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| ... 
and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=footer). Last update [dfa10a4...252c784](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6919/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6919", "html_url": "https://github.com/huggingface/transformers/pull/6919", "diff_url": "https://github.com/huggingface/transformers/pull/6919.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6919.patch", "merged_at": 1599139742000 }
https://api.github.com/repos/huggingface/transformers/issues/6918
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6918/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6918/comments
https://api.github.com/repos/huggingface/transformers/issues/6918/events
https://github.com/huggingface/transformers/issues/6918
691,584,218
MDU6SXNzdWU2OTE1ODQyMTg=
6,918
RuntimeError: Internal: /sentencepiece/src/sentencepiece_processor.cc(818) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
{ "login": "magic20191", "id": 50441790, "node_id": "MDQ6VXNlcjUwNDQxNzkw", "avatar_url": "https://avatars.githubusercontent.com/u/50441790?v=4", "gravatar_id": "", "url": "https://api.github.com/users/magic20191", "html_url": "https://github.com/magic20191", "followers_url": "https://api.github.com/users/magic20191/followers", "following_url": "https://api.github.com/users/magic20191/following{/other_user}", "gists_url": "https://api.github.com/users/magic20191/gists{/gist_id}", "starred_url": "https://api.github.com/users/magic20191/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/magic20191/subscriptions", "organizations_url": "https://api.github.com/users/magic20191/orgs", "repos_url": "https://api.github.com/users/magic20191/repos", "events_url": "https://api.github.com/users/magic20191/events{/privacy}", "received_events_url": "https://api.github.com/users/magic20191/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "The `AlbertTokenizer` in `transformers` is a SentencePiece based tokenizer, so it cannot load `vocab.txt`. You could try loading it in `BertTokenizer`, as it seems to be a wordpiece tokenizer vocabulary.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
NONE
null
# ❓ When I use transformers in Jupyter, the tokenizer can't load the vocab, but `AlbertModel.from_pretrained` works

**CODE:**

    from transformers import AlbertTokenizer, AlbertModel
    import torch
    tokenizer = AlbertTokenizer.from_pretrained("./albert-v1/vocab.txt")

**The following error occurs:**

    Calling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated
    ------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-10-bf78623a6e4a> in <module>
    ----> 1 tokenizer = AlbertTokenizer.from_pretrained("./albert-v1/vocab.txt")

    ~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, *inputs, **kwargs)
       1138
       1139         """
    -> 1140         return cls._from_pretrained(*inputs, **kwargs)
       1141
       1142     @classmethod

    ~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
       1285         # Instantiate tokenizer.
       1286         try:
    -> 1287             tokenizer = cls(*init_inputs, **init_kwargs)
       1288         except OSError:
       1289             raise OSError(

    ~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_albert.py in __init__(self, vocab_file, do_lower_case, remove_space, keep_accents, bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, **kwargs)
       153
       154         self.sp_model = spm.SentencePieceProcessor()
    --> 155         self.sp_model.Load(vocab_file)
       156
       157     @property

    ~/anaconda3/lib/python3.7/site-packages/sentencepiece.py in Load(self, model_file, model_proto)
       365         if model_proto:
       366             return self.LoadFromSerializedProto(model_proto)
    --> 367         return self.LoadFromFile(model_file)
       368
       369

    ~/anaconda3/lib/python3.7/site-packages/sentencepiece.py in LoadFromFile(self, arg)
       175
       176     def LoadFromFile(self, arg):
    --> 177         return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
       178
       179     def Init(self,

    RuntimeError: Internal: /sentencepiece/src/sentencepiece_processor.cc(818) [model_proto->ParseFromArray(serialized.data(), serialized.size())]

![image](https://user-images.githubusercontent.com/50441790/92065758-cf984780-edd2-11ea-8774-6a9dd5f27b16.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6918/timeline
completed
null
null
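Editor's note: a minimal sketch of the fix suggested in the comments above. `AlbertTokenizer` wraps a SentencePiece model, so it must be pointed at the SentencePiece file (conventionally `spiece.model`) or its directory, while a `vocab.txt` wordpiece vocabulary belongs to a BERT-style tokenizer. The local paths here are assumptions:

```python
# Hedged sketch, assuming a local ./albert-v1 checkpoint directory:
# AlbertTokenizer needs a SentencePiece model, not a wordpiece vocab.txt.
from transformers import AlbertTokenizer, BertTokenizer

# SentencePiece-based: load from the directory holding spiece.model.
albert_tokenizer = AlbertTokenizer.from_pretrained("./albert-v1")

# A vocab.txt is a wordpiece vocabulary, so BertTokenizer can load it
# (single-file paths are deprecated but still accepted in transformers 3.x).
bert_tokenizer = BertTokenizer.from_pretrained("./albert-v1/vocab.txt")
```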
https://api.github.com/repos/huggingface/transformers/issues/6917
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6917/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6917/comments
https://api.github.com/repos/huggingface/transformers/issues/6917/events
https://github.com/huggingface/transformers/issues/6917
691,563,796
MDU6SXNzdWU2OTE1NjM3OTY=
6,917
T5 Tokenizer fails to decode correctly and prints ⁇
{ "login": "misrasaurabh1", "id": 1271289, "node_id": "MDQ6VXNlcjEyNzEyODk=", "avatar_url": "https://avatars.githubusercontent.com/u/1271289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/misrasaurabh1", "html_url": "https://github.com/misrasaurabh1", "followers_url": "https://api.github.com/users/misrasaurabh1/followers", "following_url": "https://api.github.com/users/misrasaurabh1/following{/other_user}", "gists_url": "https://api.github.com/users/misrasaurabh1/gists{/gist_id}", "starred_url": "https://api.github.com/users/misrasaurabh1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/misrasaurabh1/subscriptions", "organizations_url": "https://api.github.com/users/misrasaurabh1/orgs", "repos_url": "https://api.github.com/users/misrasaurabh1/repos", "events_url": "https://api.github.com/users/misrasaurabh1/events{/privacy}", "received_events_url": "https://api.github.com/users/misrasaurabh1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }, { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "@mfuntowicz - Since T5 relies on google's sentencepiece tokenizer for now, can we do anything against it before our own sentencepiece tokenizer is implemented? ", "Verified that this is a problem with the original T5 sentencepience tokenizer. Opened an issue with the Google's T5 repository. https://github.com/google-research/text-to-text-transfer-transformer/issues/390", "Closing this issue , quoting from T5 github issue\r\n> > { is OOV because we intentionally removed any pages with { or } from C4 to avoid pre-training on anything other than natural language. So, it gets encoded to ??. SentencePiece has a byte fallback feature but it was not available when we trained our sentencepiece model." ]
1,599
1,599
1,599
CONTRIBUTOR
null
T5 Tokenizer tokenizes the following sequence to:
```
>>> from transformers import T5Tokenizer
>>> tokenizer = T5Tokenizer.from_pretrained("t5-base")
>>> print(tokenizer.tokenize("My phone number is 1-${phone.number}"))
['▁My', '▁phone', '▁number', '▁is', '▁1-', '$', '{', 'phone', '.', 'num', 'ber', '}']
```
So far so good, but when we decode the above sequence back, we get weird ⁇ symbols:
```
>>> print(tokenizer.decode(tokenizer.encode("My phone number is 1-${phone.number}")))
My phone number is 1-$ ⁇ phone.number ⁇
```
This, along with bug https://github.com/huggingface/transformers/issues/6150, shows that the T5 Tokenizer:
- is not cycle consistent
- ignores multiple whitespaces

## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.9.0-12-amd64-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help
T5: @patrickvonplaten

## To reproduce
Shown above.

## Expected behavior
A cycle-consistent T5 Tokenizer that works on a variety of inputs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6917/timeline
completed
null
null
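Editor's note: the failure mode is easy to verify with a round trip, following the reproduction in the issue body. Per the closing comment, `{` and `}` are out-of-vocabulary because pages containing them were filtered from C4, so they decode to ` ⁇ `:

```python
# Round-trip check from the issue body: encode, decode, and compare.
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
text = "My phone number is 1-${phone.number}"

round_trip = tokenizer.decode(tokenizer.encode(text))
print(round_trip)           # "My phone number is 1-$ ⁇ phone.number ⁇ "
print(round_trip == text)   # False: "{" and "}" are OOV for T5's SentencePiece model
```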
https://api.github.com/repos/huggingface/transformers/issues/6916
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6916/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6916/comments
https://api.github.com/repos/huggingface/transformers/issues/6916/events
https://github.com/huggingface/transformers/issues/6916
691,450,286
MDU6SXNzdWU2OTE0NTAyODY=
6,916
[model weights caching] model upload doesn't check model weights hash
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I can confirm it was previously checking the model weights and re-downloading if the weights had been changed. Investigating.", "This is due to the CDN caching files, with a 24 hour delay. After 24 hours it should download your file, but if you want it now you can use the `use_cdn` flag and set it to `False`. You can see the documentation for this [here](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L573-L585).", "Thank you for the hint, @LysandreJik. So `from_pretrained(mname, use_cdn=False)`\r\n\r\nBut that might be tricky for end users who won't know that the code base has changed yet the model weights they get are out sync.\r\n\r\nIs there a way to signal CDN to invalidate the cache for some files? It could then be done from the upload util.\r\n\r\n\r\n\r\n", "FWIW, I wrote a one liner to force cache update for the 4 models I'm working at the moment.\r\n```\r\nPYTHONPATH=\"src\" python -c 'from transformers import AutoModel; [AutoModel.from_pretrained(\"stas/fsmt-wmt19-\"+p, use_cdn=False) for p in [\"en-ru\",\"ru-en\",\"en-de\",\"de-en\"]]'\r\n```\r\nI now have that in my script, so I don't need to think about it.", "@LysandreJik, unfortunately this doesn't solve the issue\r\n\r\n`AutoModel.from_pretrained(mname, use_cdn=False)`\r\n\r\nIndeed forces a download of the recently updated model - but then if this flag is no longer used in the application - it still downloads the CDN cached version and ends up using the wrong version.\r\n\r\nSo, basically, this results in 2 copies (different hashes) sitting in the cache dir. \r\n\r\nAnd normal usage w/o using `use_cdn=False` looks up the old version and not the new one. (so things like `run_eval.py` still use the old one)\r\n\r\nThanks.\r\n", "can you run `AutoModel.from_pretrained(mname, use_cdn=False)` in a debugger and check whether the downloaded url is a `https://cdn.huggingface.co` or a `https://s3.amazonaws.com/models.huggingface.co` url?", "I can do that, but I already checked that it downloads the updated model w/ `use_cdn=False`. But then if you run it again w/o `use_cdn=False` it ignores the new download and uses the old model again (if I delete the cached version, it redownloads the old cached version w/o `use_cdn=False` ).", "Oh yeah ok, I see. Can you `run_eval.py` on a local folder path then?", "> Can you `run_eval.py` on a local folder path then?\r\n\r\nYes. Except others can't as they don't have my local copy.\r\n\r\ne.g. @sshleifer wants to eval my PR https://github.com/huggingface/transformers/pull/6940, but now has to wait till tomorrow for CDN to expire (or hack around it).\r\n\r\nLast night I uploaded an experimental model, which proved to be invalid, thought I re-downloaded it OK as it was working OK and made a PR, except I was testing against the non-current cached version, which was a good one.", "Can we please re-open this ticket? It hasn't been resolved", "Can we add a `--no_cdn` boolean flag to `run_eval.py` that would then call `AutoModelForSeq2SeqLM.from_pretrained(use_cdn=False)`?\r\n\r\nIn our dev workflow we mostly don't use the cdn while the files are still in-flux. Cloudfront invalidation comes with its own set of issues so it's better to view cdn as a means to distribute permanent files. (for this reason we don't serve config.json files from Cloudfront)", "> Can we add a `--no_cdn` boolean flag to `run_eval.py` that would then call `AutoModelForSeq2SeqLM.from_pretrained(use_cdn=False)`?\r\n\r\nIt could be done. 
I have a feeling then there will be others.\r\n\r\nPerhaps an alternative solution would be to introduce an env var, that would transparently override cdn cache in any situation w/o needing to change every script? `TRANSFORMERS_USE_CDN=False`?\r\n\r\n> In our dev workflow we mostly don't use the cdn while the files are still in-flux. Cloudfront invalidation comes with its own set of issues so it's better to view cdn as a means to distribute permanent files. (for this reason we don't serve config.json files from Cloudfront)\r\n\r\nUnderstood!\r\n\r\nHow do you let others onto testing the model files? Putting them on dropbox or something and sharing the link?\r\n", "No, just S3 links!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "https://github.com/huggingface/transformers/pull/8324 should resolve this." ]
1,599
1,604
1,604
CONTRIBUTOR
null
I have re-uploaded model weights via `transformers-cli upload` and noticed that when I tried to use them, they didn't get re-downloaded; instead the cached version continued to be used. The problem seems to come from the fact that the other uploaded files haven't changed, only the model weights. I double-checked that the md5sum of the old weights file is different from the new one.

I re-uploaded the whole folder using:
```
transformers-cli upload fsmt-wmt19-en-de
```
If I hunt down the cached files (not an easy task) and delete those, it does re-download the new version.

If I diff the cached weights file against the updated cache file (which gets re-downloaded if I move away the original cached file), they aren't the same:
```
Binary files before/d97352d9f1f96ee4c6055f203812035b4597258a837db1f4f0803a2932cc3071.53ce64c7097bfcd85418af04a21b4a897c78c8440de3af078e577727ad9de3a0 and after/d97352d9f1f96ee4c6055f203812035b4597258a837db1f4f0803a2932cc3071.53ce64c7097bfcd85418af04a21b4a897c78c8440de3af078e577727ad9de3a0 differ
```
Could we please include the model weights file in the hash calculation? Thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6916/timeline
completed
null
null
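Editor's note: a sketch of the workaround discussed in the thread above, for transformers v3.x (the thread also notes that https://github.com/huggingface/transformers/pull/8324 later reworked this). `use_cdn=False` downloads directly from S3, bypassing the roughly 24-hour CloudFront cache; the model names mirror the one-liner posted in the comments:

```python
# Hedged sketch for transformers 3.x: force-refresh freshly re-uploaded weights
# by bypassing the CDN cache. Model names come from the thread's one-liner.
from transformers import AutoModel

for pair in ["en-ru", "ru-en", "en-de", "de-en"]:
    AutoModel.from_pretrained("stas/fsmt-wmt19-" + pair, use_cdn=False)
```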
https://api.github.com/repos/huggingface/transformers/issues/6915
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6915/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6915/comments
https://api.github.com/repos/huggingface/transformers/issues/6915/events
https://github.com/huggingface/transformers/pull/6915
691,413,647
MDExOlB1bGxSZXF1ZXN0NDc4MTYyMzkx
6,915
Fix mixed precision issue in TF DistilBert
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=h1) Report\n> Merging [#6915](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `2.01%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6915/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6915 +/- ##\n==========================================\n+ Coverage 77.81% 79.83% +2.01% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 23034 +582 \n+ Misses 6401 5819 -582 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.82% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| ... 
and [23 more](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=footer). Last update [4ebb52a...481baa3](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I run a test with this change on my ubuntu 18.04 with a 2080Ti GPU, tensorflow-gpu 2.2.0:\r\n```\r\nfrom tensorflow.keras.layers import Input, Embedding, Bidirectional, GRU, Dense\r\nfrom tensorflow.keras.models import Model\r\nfrom transformers import TFDistilBertModel\r\nfrom tensorflow.keras.mixed_precision import experimental as mixed_precision\r\npolicy = mixed_precision.Policy('mixed_float16')\r\nmixed_precision.set_policy(policy)\r\n\r\nbert = TFDistilBertModel.from_pretrained('distilbert-base-uncased')\r\ninputs = Input(shape=(None,), dtype='int32')\r\nbert_out = bert(inputs)[0]\r\noutput = Dense(9, activation='softmax', dtype='float32')(bert_out)\r\nmodel = Model(inputs, output)\r\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\r\nmodel.summary()\r\nx = [[5, 2, 3] * 3] * 100\r\ny = [[1, 2, 3] * 3] * 100\r\nmodel.fit(x=x, y=y, epochs=20, batch_size=16)\r\n```\r\nAnd get error info:\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 8, in <module>\r\n bert = TFDistilBertModel.from_pretrained('distilbert-base-uncased')\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_utils.py\", line 602, in from_pretrained\r\n model(model.dummy_inputs, training=False) # build the network with dummy inputs\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 615, in call\r\n outputs = self.distilbert(inputs, **kwargs)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 508, in call\r\n tfmr_output = self.transformer(\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 401, in call\r\n layer_outputs = layer_module(hidden_state, attn_mask, head_mask[i], output_attentions, training=training)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 355, in call\r\n ffn_output = self.ffn(sa_output, training=training) # (bs, seq_length, dim)\r\n File 
\"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 304, in call\r\n x = self.activation(x)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py\", line 968, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/layers/core.py\", line 420, in call\r\n return self.activation(inputs)\r\n File \"/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py\", line 79, in gelu\r\n cdf = 0.5 * (1.0 + tf.math.erf(x / tf.math.sqrt(2.0)))\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py\", line 984, in binary_op_wrapper\r\n return func(x, y, name=name)\r\n File \"/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py\", line 1081, in _truediv_python3\r\n raise TypeError(\"x and y must have the same dtype, got %r != %r\" %\r\nTypeError: x and y must have the same dtype, got tf.float16 != tf.float32\r\n```\r\nI made a modification to L299:\r\n`self.activation = (\r\n tf.keras.layers.Activation(gelu, dtype='float32') if config.activation == \"gelu\" else tf.keras.activations.relu\r\n )`\r\nAnd then the model began to train, however the loss don't decrease and the accuracy is always 0:\r\n```\r\n7/7 [==============================] - 0s 28ms/step - loss: 2.1972 - accuracy: 0.0000e+00\r\nEpoch 2/20\r\n7/7 [==============================] - 0s 29ms/step - loss: 2.1972 - accuracy: 0.0000e+00\r\nEpoch 3/20\r\n7/7 [==============================] - 0s 30ms/step - loss: 2.1972 - accuracy: 0.0000e+00\r\nEpoch 4/20\r\n7/7 [==============================] - 0s 31ms/step - loss: 2.1972 - accuracy: 0.0000e+00\r\n```\r\n\r\nI have trid this code in float32 precision, and it works. \r\n```\r\nEpoch 1/20\r\n7/7 [==============================] - 0s 31ms/step - loss: 2.5418 - accuracy: 0.2800\r\nEpoch 2/20\r\n7/7 [==============================] - 0s 33ms/step - loss: 1.2452 - accuracy: 0.3356\r\nEpoch 3/20\r\n7/7 [==============================] - 0s 31ms/step - loss: 1.1438 - accuracy: 0.3267\r\nEpoch 4/20\r\n7/7 [==============================] - 0s 33ms/step - loss: 1.1219 - accuracy: 0.3400\r\n```", "@xuxingya , the accuracy not improved during training is due to a line \r\n\r\n > scores = scores - 1e30 * (1.0 - mask)\r\n\r\nwhile `1e30` with `half precision` will cause `nan` values. I am still trying to figure out a way to deal with it.", "@xuxingya Would you mind to run the test on your side again, please? I tested it with your example, and it is fine now.", "@chiapas Yes, I run the test and now it's fine." ]
1,599
1,651
1,599
COLLABORATOR
null
Fix mixed precision issue in TF DistilBert by removing hard-coded uses of float32. Fixes #6858
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6915/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6915", "html_url": "https://github.com/huggingface/transformers/pull/6915", "diff_url": "https://github.com/huggingface/transformers/pull/6915.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6915.patch", "merged_at": 1599222597000 }
https://api.github.com/repos/huggingface/transformers/issues/6914
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6914/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6914/comments
https://api.github.com/repos/huggingface/transformers/issues/6914/events
https://github.com/huggingface/transformers/pull/6914
691,377,847
MDExOlB1bGxSZXF1ZXN0NDc4MTMxOTAw
6,914
Template updates
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=h1) Report\n> Merging [#6914](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `1.21%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6914/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6914 +/- ##\n==========================================\n+ Coverage 77.81% 79.03% +1.21% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 22804 +352 \n+ Misses 6401 6049 -352 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `30.15% <0.00%> (-65.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.83%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.64% <0.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.84% <0.00%> (+1.61%)` | :arrow_up: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `85.18% <0.00%> (+2.46%)` | :arrow_up: |\n| ... 
and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=footer). Last update [4ebb52a...408286d](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
COLLABORATOR
null
When adding the Funnel Transformer, I noticed a few things wrong in the template. This PR fixes those:
- using `transformers.testing_utils` instead of `.utils`
- remove xxx from names in tests (as @patrickvonplaten has done recently on Bert)
- add multiple choice model test
- fix label names in masked lm model
- remove the mention to add to pipelines.py in the checklist since there is nothing to do there
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6914/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6914", "html_url": "https://github.com/huggingface/transformers/pull/6914", "diff_url": "https://github.com/huggingface/transformers/pull/6914.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6914.patch", "merged_at": 1599120899000 }
https://api.github.com/repos/huggingface/transformers/issues/6913
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6913/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6913/comments
https://api.github.com/repos/huggingface/transformers/issues/6913/events
https://github.com/huggingface/transformers/issues/6913
691,373,137
MDU6SXNzdWU2OTEzNzMxMzc=
6,913
Small bug on website
{ "login": "tanmaypandey7", "id": 36691630, "node_id": "MDQ6VXNlcjM2NjkxNjMw", "avatar_url": "https://avatars.githubusercontent.com/u/36691630?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanmaypandey7", "html_url": "https://github.com/tanmaypandey7", "followers_url": "https://api.github.com/users/tanmaypandey7/followers", "following_url": "https://api.github.com/users/tanmaypandey7/following{/other_user}", "gists_url": "https://api.github.com/users/tanmaypandey7/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanmaypandey7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmaypandey7/subscriptions", "organizations_url": "https://api.github.com/users/tanmaypandey7/orgs", "repos_url": "https://api.github.com/users/tanmaypandey7/repos", "events_url": "https://api.github.com/users/tanmaypandey7/events{/privacy}", "received_events_url": "https://api.github.com/users/tanmaypandey7/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! Yes, this isn't an issue, this is the intended behavior. It's the standard behavior with Sphinx/ReadTheDocs. You can see a similar example with the [PyTorch docs](https://pytorch.org/docs/stable/tensors.html)." ]
1,599
1,599
1,599
NONE
null
Hi, I am not sure if this is the correct place to report this, but all of the web pages under https://huggingface.co/transformers/master/index.html have an issue with scrolling: scrolling the main text (right side) also scrolls the table of contents (left side).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6913/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6912
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6912/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6912/comments
https://api.github.com/repos/huggingface/transformers/issues/6912/events
https://github.com/huggingface/transformers/issues/6912
691,363,867
MDU6SXNzdWU2OTEzNjM4Njc=
6,912
batch_encode_plus does not lead to the same predictions as encode_plus
{ "login": "yhifny", "id": 11491724, "node_id": "MDQ6VXNlcjExNDkxNzI0", "avatar_url": "https://avatars.githubusercontent.com/u/11491724?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yhifny", "html_url": "https://github.com/yhifny", "followers_url": "https://api.github.com/users/yhifny/followers", "following_url": "https://api.github.com/users/yhifny/following{/other_user}", "gists_url": "https://api.github.com/users/yhifny/gists{/gist_id}", "starred_url": "https://api.github.com/users/yhifny/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yhifny/subscriptions", "organizations_url": "https://api.github.com/users/yhifny/orgs", "repos_url": "https://api.github.com/users/yhifny/repos", "events_url": "https://api.github.com/users/yhifny/events{/privacy}", "received_events_url": "https://api.github.com/users/yhifny/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, are you sure your issue comes from the tokenizer? If you encode your text using `encode_plus` and `batch_encode_plus`, do you see a difference in the tokens generated?", "I only use encode_plus and batch_encode_plus and call model inference. I do not think the model inference is the problem as you see in the function calls. so I think it is coming from encode_plus and batch_encode_plus. Regarding your question, I see that that batch_encode_plus add ones at the end of the list \" 1, 1, 1, 1, 1, 1]\". and I thought this is this difference may be a reason for the problem.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
NONE
null
I use batch_encode_plus to speed up the predictions but it leads to different results compared to "encode_plus" ``` tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2") inputs = tokenizer.encode_plus(QA_input['question'], QA_input['context'], padding = True, add_special_tokens=True, ``` and ``` tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2") questions = [(q['question'],q['context']) for q in q_dict] inputs = tokenizer.batch_encode_plus(questions, padding=True, add_special_tokens=True, return_tensors="pt") ``` Most of the time in the predictions are the same but sometimes they are different: ``` def predict_batch_using_model(model, model_name, q_dict): tokenizer = AutoTokenizer.from_pretrained(model_name) questions = [(q['question'],q['context']) for q in q_dict] inputs = tokenizer.batch_encode_plus(questions, padding=True, add_special_tokens=True, return_tensors="pt") logger.debug('inputs batch_encode_plus: %s\n',inputs) if torch.cuda.is_available(): inputs.to('cuda') answer_start_scores, answer_end_scores = model(**inputs) # a list of (answer, probs_start) answer_probs_start_batch = [] for i in range(len(q_dict)): input_ids = inputs["input_ids"].tolist()[i] text_tokens = tokenizer.convert_ids_to_tokens(input_ids) answer_start = torch.argmax( answer_start_scores[i] ) # Get the most likely beginning of answer with the argmax of the score answer_end = torch.argmax(answer_end_scores[i]) + 1 # Get the most likely end of answer with the argmax of the score logger.debug('answer_start, answer_end: %d %d %d\n',i, answer_start, answer_end) answer = tokenizer.convert_tokens_to_string(text_tokens[answer_start:answer_end]) total_scores = answer_start_scores[i].add_(answer_end_scores[i]) # in place addition total_scores = total_scores.cpu().data.numpy() probs = _compute_softmax(total_scores) answer_probs_start_batch.append( ( answer, probs[answer_start])) return answer_probs_start_batch def predict_using_model(model, model_name, QA_input): tokenizer = AutoTokenizer.from_pretrained(model_name) inputs = tokenizer.encode_plus(QA_input['question'], QA_input['context'], padding = True, add_special_tokens=True, return_tensors="pt") logger.debug('inputs encode_plus: %s\n',inputs) if torch.cuda.is_available(): inputs.to('cuda') input_ids = inputs["input_ids"].tolist()[0] text_tokens = tokenizer.convert_ids_to_tokens(input_ids) answer_start_scores, answer_end_scores = model(**inputs) answer_start = torch.argmax( answer_start_scores ) # Get the most likely beginning of answer with the argmax of the score answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score logger.debug('answer_start, answer_end: %d %d %d\n',0, answer_start, answer_end) answer = tokenizer.convert_tokens_to_string(text_tokens[answer_start:answer_end]) total_scores = answer_start_scores.add_(answer_end_scores) # in place addition total_scores = total_scores.cpu().data.numpy() probs = _compute_softmax(total_scores) return answer, probs[answer_start] ``` the input dictionary ` {"q": "what is color of thomas train", "gt_answer": "blue", "results": [{"system": "Google KG", "response": [], "latency": 350}, {"system": "Google CSE", "response": [{"source": "www.strasburgrailroad.com", "title": "15 Fun Facts About Thomas the Tank Engine - Strasburg Rail Road", "snippet": "Aug 15, 2017 ... Thomas' iconic blue color is also the official color of the North Western Railway. 
\nBefore Thomas was blue he was originally teal green with\u00a0..."}, {"source": "www.youtube.com", "title": "Learn Colors with My First Railways | Playing Around with Thomas ...", "snippet": "Oct 21, 2017 ... About Thomas & Friends: Based on a series of children's books, \"Thomas & \nFriends\" features Thomas the Tank Engine adventures with other\u00a0..."}, {"source": "en.wikipedia.org", "title": "Thomas the Tank Engine - Wikipedia", "snippet": "Thomas the Tank Engine is an anthropomorphised fictional steam locomotive in \nThe Railway ... In The Adventure Begins which is a retelling of Thomas's early \ndays on Sodor, he is a bluish-green colour when he first arrives on Sodor, his \ntanks\u00a0..."}, {"source": "play.thomasandfriends.com", "title": "Meet the Thomas & Friends Engines | Thomas & Friends", "snippet": "Discover all the engines from Sodor! Thomas & Friends fans can learn about all \ntheir favorite characters from the Thomas & Friends books, TV series and\u00a0..."}, {"source": "www.theguardian.com", "title": "Thomas the Tank Engine had to shut the hell up to save children ...", "snippet": "Jul 22, 2014 ... Thomas the Tank Engine had to shut the hell up to save children everywhere. \nThis article is more than 6 years old. Tracy Van Slyke. Classism\u00a0..."}, {"source": "www.amazon.com", "title": "RoomMates RMK1035SCS Thomas & Friends Peel ... - Amazon.com", "snippet": "RoomMates RMK1035SCS Thomas & Friends Peel and Stick Wall Decals ,Multi \ncolor. +. RoomMates RMK1831SCS Thomas The Tank Engine Peel and Stick\u00a0..."}, {"source": "ttte.fandom.com", "title": "Nia | Thomas the Tank Engine Wikia | Fandom", "snippet": "Nia is a Kenyan tank engine who befriended and accompanied Thomas on his \njourney ... Noticing how heavy his train was getting, she offered to help, but \nThomas ... of the Steam Team to have a snowplough that is not the same colour \nas her."}, {"source": "www.amazon.com", "title": "Thomas The Tank Engine Color Block Cotton Hand ... - Amazon.com", "snippet": "Buy Thomas The Tank Engine Color Block Cotton Hand Towel: Home & Kitchen - \nAmazon.com \u2713 FREE DELIVERY possible on eligible purchases."}, {"source": "ttte.fandom.com", "title": "Thomas the Tank Engine Colors", "snippet": "Thomas the Tank Engine: Colors is a book. Characters Thomas, Edward, Henry, \nJames, Percy, Bill..."}, {"source": "www.pinterest.com", "title": "Train cake, Thomas train cake, Thomas the train", "snippet": "Fondant Train Topper with Mini Train Cupcake Toppers. Each Topper is made to \norder and can be customized to suit your color scheme. Lot comes\u00a0..."}], "latency": 663}, {"system": "Bing entity", "response": [], "latency": 698}, {"system": "Bing web", "response": [{"source": "www.youtube.com", "title": "What Color Was Thomas the Tank Engine? | The Earl's Quiz ...", "snippet": "Based on a series of children's books, \"Thomas & Friends\" features Thomas the Tank Engine adventures with other locomotives on the island of Sodor. Thomas often gets into trouble, but never gives ..."}, {"source": "british-learning.com", "title": "Thomas The Train Color Pages To Print \u2013 Learning How to Read", "snippet": "Thomas and friends coloring pages 55 thomas and friends pictures to print and color. 55 thomas and friends printable coloring pages for kids. 30 free printable thomas the train coloring pages. 
For boys and girls kids and adults teenagers and toddlers preschoolers and older kids at school."}, {"source": "www.hometalk.com", "title": "Does anybody know what color blue is used for Thomas the ...", "snippet": "Here is a step by step YouTube guide to painting Thomas The Tank Engine and midway through, the blue used is referred to as a medium blue. Lighter than Navy, darker than Sky, maybe like a colonial blue? https://www.youtube.com/watch?v=MU8L6tIHk08"}], "latency": 879}], "dt": "2020-08-14T15:06:39.638346+00:00"}` we observe a difference in the prediction of the tenth context. Any reason for that?
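A quick way to isolate whether the mismatch comes from the tokenizer or from batched padding is to compare the two encodings directly. Below is a minimal sketch (not part of the original issue); the model name is taken from the snippet above, and the question/context pairs are purely illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
pairs = [
    ("what is color of thomas train", "Thomas' iconic blue color is the official color."),
    ("who is nia", "Nia is a Kenyan tank engine who befriended Thomas."),
]

batched = tokenizer.batch_encode_plus(pairs, padding=True, add_special_tokens=True)
pad_id = tokenizer.pad_token_id

for i, (question, context) in enumerate(pairs):
    single = tokenizer.encode_plus(question, context, add_special_tokens=True)
    # Strip the pad ids that batch_encode_plus appends to the shorter sequences.
    unpadded = [t for t in batched["input_ids"][i] if t != pad_id]
    assert single["input_ids"] == unpadded, f"tokenization differs for pair {i}"
```

If the assertion holds, the tokenizations agree and the occasional divergence must come from how the padded batch flows through the model, rather than from `encode_plus` vs. `batch_encode_plus` itself.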
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6912/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6911
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6911/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6911/comments
https://api.github.com/repos/huggingface/transformers/issues/6911/events
https://github.com/huggingface/transformers/pull/6911
691,351,577
MDExOlB1bGxSZXF1ZXN0NDc4MTA5NDE4
6,911
[s2s]: script to convert pl checkpoints to hf checkpoints
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=h1) Report\n> Merging [#6911](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `2.25%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6911/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6911 +/- ##\n==========================================\n+ Coverage 77.81% 80.06% +2.25% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22452 23102 +650 \n+ Misses 6401 5751 -650 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.37%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| ... 
and [18 more](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=footer). Last update [4ebb52a...87055d8](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6911/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6911", "html_url": "https://github.com/huggingface/transformers/pull/6911", "diff_url": "https://github.com/huggingface/transformers/pull/6911.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6911.patch", "merged_at": 1599140820000 }
https://api.github.com/repos/huggingface/transformers/issues/6910
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6910/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6910/comments
https://api.github.com/repos/huggingface/transformers/issues/6910/events
https://github.com/huggingface/transformers/issues/6910
691,314,741
MDU6SXNzdWU2OTEzMTQ3NDE=
6,910
adding additional additional_special_tokens to tokenizer has inconsistent behavior
{ "login": "andifunke", "id": 18445361, "node_id": "MDQ6VXNlcjE4NDQ1MzYx", "avatar_url": "https://avatars.githubusercontent.com/u/18445361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andifunke", "html_url": "https://github.com/andifunke", "followers_url": "https://api.github.com/users/andifunke/followers", "following_url": "https://api.github.com/users/andifunke/following{/other_user}", "gists_url": "https://api.github.com/users/andifunke/gists{/gist_id}", "starred_url": "https://api.github.com/users/andifunke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andifunke/subscriptions", "organizations_url": "https://api.github.com/users/andifunke/orgs", "repos_url": "https://api.github.com/users/andifunke/repos", "events_url": "https://api.github.com/users/andifunke/events{/privacy}", "received_events_url": "https://api.github.com/users/andifunke/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,604
1,604
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-45-generic-x86_64-with-debian-bullseye-sid - Python version: 3.6.10 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @mfuntowicz ## Information affected: all tokenizers based on `SpecialTokensMixin` The behavior of the `add_special_tokens()` method seems irregular to me, when adding `additional_special_tokens` to a tokenizer that already holds a list of `additional_special_tokens`. In this case the value of `self._additional_special_tokens` will simply be replaced, while the previous additional special tokens still remain in `PreTrainedTokenizer.added_tokens_encoder` and `PreTrainedTokenizer.added_tokens_decoder`. ## To reproduce Steps to reproduce the behavior: ```python from transformers import GPT2Tokenizer def print_special_tokens(): print(tokenizer.all_special_tokens) print(tokenizer.all_special_ids) print(tokenizer.additional_special_tokens) print(tokenizer.additional_special_tokens_ids) print(tokenizer.special_tokens_map) print(tokenizer.added_tokens_encoder) print(tokenizer.added_tokens_decoder) tokenizer = GPT2Tokenizer.from_pretrained( 'gpt2', pad_token='[PAD]', additional_special_tokens=['<A>', '<B>', '<C>'] ) print_special_tokens() tokenizer.add_special_tokens({ 'cls_token': '[CLS]', 'additional_special_tokens': ['<B>', '<X>', '<X>'] }) print('-'*50) print_special_tokens() ``` Output: ``` ['<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '[PAD]', '<A>', '<B>', '<C>'] [50256, 50256, 50256, 50257, 50258, 50259, 50260] ['<A>', '<B>', '<C>'] [50258, 50259, 50260] {'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '[PAD]', 'additional_special_tokens': "['<A>', '<B>', '<C>']"} {'[PAD]': 50257, '<A>': 50258, '<B>': 50259, '<C>': 50260} {50257: '[PAD]', 50258: '<A>', 50259: '<B>', 50260: '<C>'} -------------------------------------------------- ['<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '[PAD]', '[CLS]', '<B>', '<X>'] [50256, 50256, 50256, 50257, 50261, 50259, 50262] ['<B>', '<X>', '<X>'] [50259, 50262, 50262] {'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'additional_special_tokens': "['<B>', '<X>', '<X>']"} {'[PAD]': 50257, '<A>': 50258, '<B>': 50259, '<C>': 50260, '[CLS]': 50261, '<X>': 50262} {50257: '[PAD]', 50258: '<A>', 50259: '<B>', 50260: '<C>', 50261: '[CLS]', 50262: '<X>'} ``` ## Expected behavior Additional special tokens added by `add_special_tokens()` should be appended to the existing `_additional_special_tokens` list and not replace them. Also, there should be some deduplication logic. 
The following code change in `SpecialTokensMixin.add_special_tokens()` will do exactly this: ```python for key, value in special_tokens_dict.items(): assert key in self.SPECIAL_TOKENS_ATTRIBUTES, f"Key {key} is not a special token" if key == "additional_special_tokens": assert isinstance(value, (list, tuple)) and all( isinstance(t, (str, AddedToken)) for t in value ), f"Tokens {value} for key {key} should all be str or AddedToken instances" if self.verbose: logger.info("Adding %s to `additional_special_tokens`", value) for token in value: if token not in self.additional_special_tokens: self._additional_special_tokens.append(token) added_tokens += self.add_tokens(value, special_tokens=True) else: assert isinstance( value, (str, AddedToken) ), f"Token {value} for key {key} should be a str or an AddedToken instance" if self.verbose: logger.info("Assigning %s to the %s key of the tokenizer", value, key) setattr(self, key, value) added_tokens += self.add_tokens([value], special_tokens=True) ``` Now, when running the above code example the output is as expected (imho): ``` ['<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '[PAD]', '<A>', '<B>', '<C>'] [50256, 50256, 50256, 50257, 50258, 50259, 50260] ['<A>', '<B>', '<C>'] [50258, 50259, 50260] {'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '[PAD]', 'additional_special_tokens': "['<A>', '<B>', '<C>']"} {'[PAD]': 50257, '<A>': 50258, '<B>': 50259, '<C>': 50260} {50257: '[PAD]', 50258: '<A>', 50259: '<B>', 50260: '<C>'} -------------------------------------------------- ['<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '[PAD]', '[CLS]', '<A>', '<B>', '<C>', '<X>'] [50256, 50256, 50256, 50257, 50261, 50258, 50259, 50260, 50262] ['<A>', '<B>', '<C>', '<X>'] [50258, 50259, 50260, 50262] {'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'additional_special_tokens': "['<A>', '<B>', '<C>', '<X>']"} {'[PAD]': 50257, '<A>': 50258, '<B>': 50259, '<C>': 50260, '[CLS]': 50261, '<X>': 50262} {50257: '[PAD]', 50258: '<A>', 50259: '<B>', 50260: '<C>', 50261: '[CLS]', 50262: '<X>'} ``` I could open a PR if you agree that this is indeed the expected behavior.
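For completeness, the replace-versus-append behavior reported above can be reproduced in a couple of lines. This is a sketch assuming the 3.1.0 semantics described in the issue, with purely illustrative token strings:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained(
    "gpt2", additional_special_tokens=["<A>", "<B>", "<C>"]
)
tokenizer.add_special_tokens({"additional_special_tokens": ["<B>", "<X>"]})

# With the current replace semantics, '<A>' and '<C>' disappear from the
# special-tokens list even though their ids stay allocated in the encoder.
print(tokenizer.additional_special_tokens)      # ['<B>', '<X>']
print("<A>" in tokenizer.added_tokens_encoder)  # True
```

The proposed patch above would instead leave `additional_special_tokens` as `['<A>', '<B>', '<C>', '<X>']`, keeping the list and the encoder consistent.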
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6910/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6910/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6909
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6909/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6909/comments
https://api.github.com/repos/huggingface/transformers/issues/6909/events
https://github.com/huggingface/transformers/issues/6909
691,259,406
MDU6SXNzdWU2OTEyNTk0MDY=
6,909
[style] automate reformatting with pre-commit hooks
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I personally wouldn't like having a pre-commit hook change all my commits without me being able to see the end result.\r\nOn my setup, I have a pre-push hook that aborts a push if make quality fails. I think if we had an install script, we could handle both options?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi! bring back this because I think in suggest pre-commit instead of `make ...`\r\n\r\nWith the pre-commit, we can see the results/modifications, like by example:\r\n\r\n`git add .`\r\n`git commit -m \"any\"` **this will run the pre-commit**\r\n- if everything it's ok at the pre-commit pipeline, the commit will be created\r\n- else if he modifies something (like black or style hook) he will not create the commit and change the files\r\n - when this occurs, we can see with git diff what the pre-commit change, or can just use the `--show-diff-on-failure` flag when running pre-commit.\r\n\r\nI think that doesn't need everybody use pre-commit, can use both option (the actual format with running manually `make ...` and also using pre-commit) – but maybe don't make sense because will duplicate things? \r\n\r\nA little setup for pre-commit, i have tested here:\r\n\r\nadd `.pre-commit-config.yaml` - \r\n```yml\r\nrepos:\r\n- repo: https://github.com/psf/black\r\n rev: 22.1.0\r\n hooks:\r\n - id: black\r\n- repo: https://github.com/pycqa/isort\r\n rev: 5.10.1\r\n hooks:\r\n - id: isort\r\n name: isort (python)\r\n- repo: https://github.com/PyCQA/flake8\r\n rev: 4.0.1\r\n hooks:\r\n - id: flake8\r\n- repo: local\r\n hooks:\r\n - id: autogenerate_code\r\n name: autogenerate_code\r\n entry: python setup.py deps_table_update\r\n language: python\r\n types: [python]\r\n pass_filenames: false\r\n - id: extra_style_checks\r\n name: extra_style_checks\r\n entry: make extra_style_checks\r\n language: system\r\n```\r\nNote:\r\n - The hooks _autogenerate_code_ and _extra_style_checks_, can be call using the make command or running the python.\r\n\r\nInstall pre-commit:\r\n`pre-commit install`\r\n\r\nModify src/transformers/activations.py:\r\n```diff\r\n@@ -31,7 +31,8 @@ class NewGELUActivation(nn.Module):\r\n \"\"\"\r\n def forward(self, input: Tensor) -> Tensor:\r\n- return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\r\n+ return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 /\r\n+ math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\r\n```\r\n```console\r\n$ git add -u\r\n$ git commit -m \"test pre-commit pipeline\"\r\n\r\nblack....................................................................Failed\r\n- hook id: black\r\n- files were modified by this hook\r\n\r\nreformatted src/transformers/activations.py\r\n\r\nAll done! 
✨ 🍰 ✨\r\n1 file reformatted.\r\n\r\nisort (python)...........................................................Passed\r\nflake8...................................................................Passed\r\nautogenerate_code........................................................Passed\r\nextra_style_checks.......................................................Passed\r\n\r\n$ git status\r\nOn branch master\r\nYour branch is up to date with 'origin/master'.\r\n\r\nChanges to be committed:\r\n (use \"git restore --staged <file>...\" to unstage)\r\n modified: src/transformers/activations.py\r\n\r\nChanges not staged for commit:\r\n (use \"git add <file>...\" to update what will be committed)\r\n (use \"git restore <file>...\" to discard changes in working directory)\r\n modified: src/transformers/activations.py\r\n\r\n$ git diff\r\n--- a/src/transformers/activations.py\r\n+++ b/src/transformers/activations.py\r\n@@ -31,8 +31,7 @@ class NewGELUActivation(nn.Module):\r\n \"\"\"\r\n \r\n def forward(self, input: Tensor) -> Tensor:\r\n- return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 /\r\n- math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\r\n+ return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\r\n```\r\n\r\n\r\nto show git diff automatically after the pre-commit can add:\r\n```yml\r\n- repo: local\r\n hooks:\r\n - id: git-diff\r\n name: git diff\r\n entry: git diff --exit-code\r\n language: system\r\n pass_filenames: false\r\n always_run: true\r\n```\r\n", "Even though I originally created this thread 1.5 years later I now agree with @sgugger, that I don't want format changes done while pushing - I need to see what has been changed since sometimes the autoformatter messes things up badly and I need to rewrite things to make the end result readable.\r\n\r\nIf this can be done as an option and not a requirement then I'm not against it, but there needs to be a way to validate/reformat files before git is involved.\r\n\r\nBTW, `precommit` can be run manually as well and not via git, which doesn't require `pre-commit install`:\r\n\r\n```\r\npre-commit run --all-files\r\n```\r\n\r\nAnd we have 2 ways to reformat files: `fixup` (fast - only modified files) - `style` (slow)", "yes use pre-commit don't make sense if does not want to always run the pipeline...\r\n\r\nAbout the `fixup` and `style`, i think can be done equal... by default pre-commit will run just in modified files (files at the commit) and if wants to run for all files can do as you shows.\r\nFor me, by default, i think makes sense always just run at modified files. And if the autoformatter messes things we can see, and if we prefer not to use some hook (like the autoformatter that have messed up something), by example run again with `SKIP=black ...`\r\n\r\nAnd the pre-commit tool will not let the commit be created if something fails, if the dev wants “force” the failed hook will need to add the `SKIP=hook ...` before the commit command", "(i personally agree with @sgugger that local hooks are best left as user-level tooling)" ]
1,599
1,647
1,604
CONTRIBUTOR
null
# 🚀 Feature request I was just reading how `make style` can be automated with pre-commit hooks. Noticing how often I run and even more often forget to run `make style` before committing, perhaps others are in the same boat - and therefore I thought to propose to the dev community to (mostly) automate this process. The only con is that each dev will still have to run `pre-commit install` once after cloning the project. This is a security feature of git, so it won't run anything automatically unless you take action to enable such a thing. If I understand it correctly, if an individual dev doesn't run `pre-commit install` inside the repo, things are just as normal as they are now. That dev will just run `make style` manually; i.e., the proposed feature is optional for those who want it. I read about it [here](https://www.mattlayman.com/blog/2018/python-code-black/), please scroll down to the section: "Black as a Git pre-commit hook". And it links to the whole detailed website: https://pre-commit.com/
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6909/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6908
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6908/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6908/comments
https://api.github.com/repos/huggingface/transformers/issues/6908/events
https://github.com/huggingface/transformers/pull/6908
691,164,378
MDExOlB1bGxSZXF1ZXN0NDc3OTQwMTM4
6,908
Funnel transformer
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=h1) Report\n> Merging [#6908](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f360d3d1c606d6d79cdf1efa53c3d719249573d?el=desc) will **increase** coverage by `0.71%`.\n> The diff coverage is `87.71%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6908/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6908 +/- ##\n==========================================\n+ Coverage 80.23% 80.95% +0.71% \n==========================================\n Files 161 164 +3 \n Lines 30119 30925 +806 \n==========================================\n+ Hits 24167 25035 +868 \n+ Misses 5952 5890 -62 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9jb252ZXJ0LnB5) | `26.98% <20.00%> (-0.61%)` | :arrow_down: |\n| [src/transformers/modeling\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mdW5uZWwucHk=) | `86.76% <86.76%> (ø)` | |\n| [src/transformers/tokenization\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `97.67% <97.67%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.31% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.47% <100.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/configuration\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Z1bm5lbC5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.97% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.87% <100.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |\n| ... 
and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=footer). Last update [0f360d3...8c684cc](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Awesome! The model seems quite complex so I didn't really understand all the functionality. \r\nA couple of things from my side:\r\n\r\n1) IMO, it's super useful to have hard coded integration tests in the test file which makes the model a lot easier to maintain (every change can quickly be checked by making sure the model stays mathematically equivalent).\r\n\r\n2) I guess a couple of comments and assert statements would be nice to make the code a bit easier to understand\r\n\r\n3) Personally, I don't like single letter variables. Search replace commands don't work on such variables and it is very difficult to understand what they mean. ", "Thanks for all the comments. I think I replied/addressed all of them except the fast small integration tests, which are going to take a bit more work (starting on this now). Let me know if I missed anything since there are a lot of comments!", "All checkpoints uploaded so I updated the incomplete lists. Also added mention of the model in all indexes, the model summary and the big table of pretrained models (sorry about the diff on that file, Funnel Transformer is one character too long and required to add an extra space on every line).\r\n\r\nShould be good to merge at the beginning of next week!", "@sgugger although you've named the models \"`funnel-base`\", \"`funnel-medium`\" so on so forth, the paper talks about all this in a different format, could a docstring be added saying `funnel-base` is `B4-4-4H768` and same for the rest. If someone wants to replicate the papers' results that would be great.\r\n\r\nedit: my bad, its there in the comments next to the model name, but still would be better in a docstring too. Sorry!\r\n" ]
1,599
1,599
1,599
COLLABORATOR
null
Fixes #4844 This PR adds the Funnel Transformer architecture in PyTorch. For now, I have uploaded two of the ten checkpoints for this model; I will convert and upload the other ones while this PR is under review and add them before it's merged. Due to the fact there are two versions of the Funnel model (one that returns hidden states with a sequence length divided by 4, and one that returns hidden states with the same sequence length and has 2 more layers), I had to make two different Testers in the test file (because the expected number of hidden states / attentions changes depending on which model is used). I adapted the script that checks all models are tested to account for that.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6908/reactions", "total_count": 12, "+1": 0, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 4, "rocket": 4, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6908/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6908", "html_url": "https://github.com/huggingface/transformers/pull/6908", "diff_url": "https://github.com/huggingface/transformers/pull/6908.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6908.patch", "merged_at": 1599566888000 }
https://api.github.com/repos/huggingface/transformers/issues/6907
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6907/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6907/comments
https://api.github.com/repos/huggingface/transformers/issues/6907/events
https://github.com/huggingface/transformers/pull/6907
691,120,792
MDExOlB1bGxSZXF1ZXN0NDc3OTAyODg5
6,907
Torchscript benchmark measure
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Results for 1):\r\n\r\n```\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: multiple - Script: True 500 128 2.575 \r\nType: multiple - Script: True 500 512 3.898 \r\nType: multiple - Script: True 2500 128 13.173 \r\nType: multiple - Script: True 2500 512 18.263 \r\n--------------------------------------------------------------------------------\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: multiple - Script: False 500 128 3.733 \r\nType: multiple - Script: False 500 512 3.857 \r\nType: multiple - Script: False 2500 128 19.101 \r\nType: multiple - Script: False 2500 512 19.356 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\nFor the smaller sequence length 128 we can see a significant speed-up (~30%) - for the longer sequence length 512, the speed-up is much smaller (and only for the bigger list of inputs).", "Results for 2)\r\n\r\n\r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n Type: batched - Script: True 512 128 0.819 \r\n Type: batched - Script: True 512 512 3.769 \r\n Type: batched - Script: True 4096 128 6.705 \r\n Type: batched - Script: True 4096 512 26.549 \r\n--------------------------------------------------------------------------------\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: batched - Script: False 512 128 0.837 \r\nType: batched - Script: False 512 512 3.88 \r\nType: batched - Script: False 4096 128 6.75 \r\nType: batched - Script: False 4096 512 27.162 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\nHere no clear speed gains can be seen. ", "I'm not sure I understand all the interactions in the benchmarking framework, but I think in line 9 (non-script model) we should be returning torch.jit.trace(model, sample_input), not the untraced model. And the sample input would have be max_length for it to work. That's were most of the gain comes from.\r\nThen the comparison is between using torch.jit.trace() and torch.jit.script(). Or maybe I'm missing some code that does that elsewhere? \r\n\r\n", "Okey, yeah that makes sense! 
I changed the benchmarking script accordingly and have the following results now: \r\n\r\n1)\r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: multiple - Script: True 500 128 1.793 \r\nType: multiple - Script: True 500 512 3.628 \r\nType: multiple - Script: True 2500 128 8.774 \r\nType: multiple - Script: True 2500 512 19.471 \r\n--------------------------------------------------------------------------------\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: multiple - Trace: True 500 128 1.83 \r\nType: multiple - Trace: True 500 512 3.783 \r\nType: multiple - Trace: True 2500 128 9.083 \r\nType: multiple - Trace: True 2500 512 20.569 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\nand \r\n\r\n2) \r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n Type: batched - Script: True 512 128 1.043 \r\n Type: batched - Script: True 512 512 4.913 \r\n Type: batched - Script: True 4096 128 8.499 \r\n Type: batched - Script: True 4096 512 34.187 \r\n--------------------------------------------------------------------------------\r\n1 / 1\r\n\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\nType: batched - Trace: True 512 128 1.046 \r\nType: batched - Trace: True 512 512 4.916 \r\nType: batched - Trace: True 4096 128 8.042 \r\nType: batched - Trace: True 4096 512 30.874 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\n=> So my understanding is now that `torch.trace(...)` is much more efficient for dynamic input shapes than not using torch.jit at all, but I also don't see how `torch.script(...)` is better than `torch.trace(...)`. If our models are compatible with `torch.trace(...)`, why do we need to have a model that is compatible with `torch.script(...)`? It is definitely more convenient to just call `torch.trace(model)` without having to provide any `input_ids`, but I'm not 100% sure whether it's worth a huge refactoring. \r\n\r\nalso cc @sgugger @LysandreJik ", "We saw different behavior in our experiments a few months ago. Will try to reproduce and update here.", "> We saw different behavior in our experiments a few months ago. Will try to reproduce and update here.\r\n\r\nWas `torch.script()` much faster than `torch.trace()` in your experiments?", "In our experiments, using trace(model, example_input) would result in a model that would only accept a sequence of the same length as example_sequence, whereas script(model) had no such restriction. 
This is the case mentioned in your documentation here: https://huggingface.co/transformers/torchscript.html#dummy-inputs-and-standard-lengths\r\n\r\nWhat that meant in practice is that you needed to trace with an example sequence of length = max_length, and then pad every example of length < max_length with zeros. Since the speed of the model is basically linear in the sequence length, for a set of inputs with varying sequence lengths we got a speed up of avg_len/max_length by using script() instead of trace().\r\n\r\nUpon further investigation, it looks like when we ran these experiments, several months ago, we were using Torch 1.2. It looks like in Torch 1.3 the fixed-length problem is no longer an issue for your BERT models (we still encounter it with other models architectures we build). So there's no longer a big speed gain from script() vs trace().\r\n\r\nThere are still some good reasons for preferring script() to trace() - scripting is guaranteed to capture the model codepath logic, whereas tracing might miss a logic branch if the example input doesn't flow through it. Also, currently tracing your models produces several warnings like the one below. But I'm not sure if those on their own are enough of a motivation to make major changes in your code base.\r\n```\r\nTracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n```", "> In our experiments, using trace(model, example_input) would result in a model that would only accept a sequence of the same length as example_sequence, whereas script(model) had no such restriction. This is the case mentioned in your documentation here: https://huggingface.co/transformers/torchscript.html#dummy-inputs-and-standard-lengths\r\n> \r\n> What that meant in practice is that you needed to trace with an example sequence of length = max_length, and then pad every example of length < max_length with zeros. Since the speed of the model is basically linear in the sequence length, for a set of inputs with varying sequence lengths we got a speed up of avg_len/max_length by using script() instead of trace().\r\n> \r\n> Upon further investigation, it looks like when we ran these experiments, several months ago, we were using Torch 1.2. It looks like in Torch 1.3 the fixed-length problem is no longer an issue for your BERT models (we still encounter it with other models architectures we build). So there's no longer a big speed gain from script() vs trace().\r\n> \r\n> There are still some good reasons for preferring script() to trace() - scripting is guaranteed to capture the model codepath logic, whereas tracing might miss a logic branch if the example input doesn't flow through it. Also, currently tracing your models produces several warnings like the one below. But I'm not sure if those on their own are enough of a motivation to make major changes in your code base.\r\n> \r\n> ```\r\n> TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n> ```\r\n\r\n@sgugger - what are your thoughts on this? ", "I think adding the scriptable layers seems cleaner to make sure everything works right with scripting/tracing. 
Not the approach in this PR but the other linked in a comment (@sbrody18 I don't know if you saw my PR to rebase on master for this branch). It ends up with most changes being helpful to read the code (type annotations and asserts) and a few extra classes for the scriptable layers but not much added code.", "@sgugger I agree - I think the extra benefit of the type and None-checking is really helpful to prevent bugs and makes the code better.\r\nI saw your PR late Friday and didn't have time to look into it. Will try to do so by end of day.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
MEMBER
null
This PR is just there to show some benchmarking results of `BertScriptableModel` vs. `BertModel`. It shows the results of running the script: `benchmark_pytorch_scripting.py`. In a nutshell, the script does the following: 1) Create a list of 500 and 2500 `input_tensors` of `batch_size` 1 with a sequence length varying between 1 and 128 or 1 and 512. Then take a scripted model `model = torch.jit.script(BertScriptableModel(...))` and loop over all 500 / 2500 `input_tensors` in a standard for loop. The scripted model is warmed up by running the loop 5 times before measuring the time. The loop is run 10 times and the fastest run is taken as a measurement. 2) Create a list of 64 and 512 input_tensors of batch_size 8 with a sequence length varying between 1 and 128 or 1 and 512. Then take a scripted model `model = torch.jit.script(BertScriptableModel(...))` and loop over all 64 / 512 `input_tensors` in a standard for loop. The scripted model is warmed up by running the loop 5 times before measuring the time. The loop is run 10 times and the fastest run is taken as a measurement. All this was done on the following environment information: ``` ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 3.0.0 - framework: PyTorch - use_torchscript: True - framework_version: 1.6.0 - python_version: 3.6.10 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020-09-02 - time: 16:26:10.562635 - fp16: False - use_multiprocessing: False - only_pretrain_model: False - cpu_ram_mb: 32088 - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 2 - use_tpu: False ``` => So only on GPU. To run this script, one can simply run: ``` ./benchmark_pytorch_scripting.py ``` **Important**: The "for" loop corresponds to the function defined in lines 32 - 37 of the file `benchmark_pytorch_scripting.py`. This function then overwrites the function that is usually measured in benchmarks, by setting `benchmark._prepare_inference_func = _prepare_inference_func` in line 49. It would be awesome if @sbrody18 could take a look at the `benchmark_pytorch_scripting.py` file to check if torchscript was used correctly.
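Since the benchmarking script itself is not reproduced here, the following stripped-down sketch shows the measurement pattern described above. It is an approximation under stated assumptions: a stock `BertModel` loaded with `torchscript=True`, eager vs. traced execution only (scripting requires the `BertScriptableModel` from this branch), and torch >= 1.3 so that tracing does not pin the sequence length:

```python
import random
import time

import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased", torchscript=True).eval()
vocab = model.config.vocab_size
traced = torch.jit.trace(model, torch.randint(0, vocab, (1, 128)))

# Batch size 1 with sequence lengths varying between 1 and 128, as in setup 1).
inputs = [torch.randint(0, vocab, (1, random.randint(1, 128))) for _ in range(500)]

def measure(m, warmup=5, runs=10):
    # Warm up first, then keep the fastest of `runs` full passes over the list.
    with torch.no_grad():
        for _ in range(warmup):
            for x in inputs:
                m(x)
        best = float("inf")
        for _ in range(runs):
            start = time.perf_counter()
            for x in inputs:
                m(x)
            best = min(best, time.perf_counter() - start)
    return best

print("eager :", measure(model))
print("traced:", measure(traced))
```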
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6907/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6907", "html_url": "https://github.com/huggingface/transformers/pull/6907", "diff_url": "https://github.com/huggingface/transformers/pull/6907.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6907.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6906
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6906/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6906/comments
https://api.github.com/repos/huggingface/transformers/issues/6906/events
https://github.com/huggingface/transformers/pull/6906
691,069,211
MDExOlB1bGxSZXF1ZXN0NDc3ODU5ODMy
6,906
Update to the huBERT model card.
{ "login": "DavidNemeskey", "id": 690386, "node_id": "MDQ6VXNlcjY5MDM4Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/690386?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DavidNemeskey", "html_url": "https://github.com/DavidNemeskey", "followers_url": "https://api.github.com/users/DavidNemeskey/followers", "following_url": "https://api.github.com/users/DavidNemeskey/following{/other_user}", "gists_url": "https://api.github.com/users/DavidNemeskey/gists{/gist_id}", "starred_url": "https://api.github.com/users/DavidNemeskey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavidNemeskey/subscriptions", "organizations_url": "https://api.github.com/users/DavidNemeskey/orgs", "repos_url": "https://api.github.com/users/DavidNemeskey/repos", "events_url": "https://api.github.com/users/DavidNemeskey/events{/privacy}", "received_events_url": "https://api.github.com/users/DavidNemeskey/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,599
1,599
1,599
CONTRIBUTOR
null
Added a link to the thesis.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6906/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6906", "html_url": "https://github.com/huggingface/transformers/pull/6906", "diff_url": "https://github.com/huggingface/transformers/pull/6906.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6906.patch", "merged_at": 1599139204000 }
https://api.github.com/repos/huggingface/transformers/issues/6905
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6905/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6905/comments
https://api.github.com/repos/huggingface/transformers/issues/6905/events
https://github.com/huggingface/transformers/pull/6905
691,049,877
MDExOlB1bGxSZXF1ZXN0NDc3ODQzNzYx
6,905
Changed link to the correct paper in the second paragraph
{ "login": "sengl", "id": 932061, "node_id": "MDQ6VXNlcjkzMjA2MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/932061?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sengl", "html_url": "https://github.com/sengl", "followers_url": "https://api.github.com/users/sengl/followers", "following_url": "https://api.github.com/users/sengl/following{/other_user}", "gists_url": "https://api.github.com/users/sengl/gists{/gist_id}", "starred_url": "https://api.github.com/users/sengl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sengl/subscriptions", "organizations_url": "https://api.github.com/users/sengl/orgs", "repos_url": "https://api.github.com/users/sengl/repos", "events_url": "https://api.github.com/users/sengl/events{/privacy}", "received_events_url": "https://api.github.com/users/sengl/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=h1) Report\n> Merging [#6905](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8f2723caf0f1bf7e1f639d28d004f81c96d19bbc?el=desc) will **decrease** coverage by `0.12%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6905/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6905 +/- ##\n==========================================\n- Coverage 79.81% 79.69% -0.13% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n- Hits 23029 22994 -35 \n- Misses 5824 5859 +35 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `89.97% <0.00%> (-4.07%)` | :arrow_down: |\n| ... 
and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=footer). Last update [8f2723c...0037bd4](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thx for fixing this!" ]
1,599
1,599
1,599
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6905/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6905/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6905", "html_url": "https://github.com/huggingface/transformers/pull/6905", "diff_url": "https://github.com/huggingface/transformers/pull/6905.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6905.patch", "merged_at": 1599139422000 }
https://api.github.com/repos/huggingface/transformers/issues/6904
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6904/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6904/comments
https://api.github.com/repos/huggingface/transformers/issues/6904/events
https://github.com/huggingface/transformers/issues/6904
690,987,887
MDU6SXNzdWU2OTA5ODc4ODc=
6,904
Greedy decoding for non-beam-search appears to ignore postprocessing
{ "login": "alexeyr", "id": 24733, "node_id": "MDQ6VXNlcjI0NzMz", "avatar_url": "https://avatars.githubusercontent.com/u/24733?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexeyr", "html_url": "https://github.com/alexeyr", "followers_url": "https://api.github.com/users/alexeyr/followers", "following_url": "https://api.github.com/users/alexeyr/following{/other_user}", "gists_url": "https://api.github.com/users/alexeyr/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexeyr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexeyr/subscriptions", "organizations_url": "https://api.github.com/users/alexeyr/orgs", "repos_url": "https://api.github.com/users/alexeyr/repos", "events_url": "https://api.github.com/users/alexeyr/events{/privacy}", "received_events_url": "https://api.github.com/users/alexeyr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Didn't realize that `postprocess_next_token_scores` mutates its argument." ]
1,599
1,599
1,599
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-4.4.0-18362-Microsoft-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> TextGeneration: @TevenLeScao ## Information Model I am using (Bert, XLNet ...): Bart The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The task I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) I was experimenting with the `generate` method to understand its behavior on small cases. ## To reproduce See https://github.com/huggingface/transformers/blob/8f2723caf0f1bf7e1f639d28d004f81c96d19bbc/src/transformers/generation_utils.py#L535-L566 The last line should probably take the `argmax` of the post-processed `scores` instead of `next_token_logits`. This should manifest as not respecting the minimum length, generating bad words, and producing repeats. On a more minor note, are `next_token_logscores` really _log_ scores? <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
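For context on the report above: a minimal, self-contained sketch of the greedy step in question, using a toy `postprocess` stand-in for `postprocess_next_token_scores`. As the resolution comment notes, the real function mutates its argument in place, which is why the library behaves correctly; the sketch only illustrates why the reporter expected a problem.

```python
import torch

def postprocess(scores: torch.Tensor, banned_token_id: int) -> torch.Tensor:
    # Toy stand-in for postprocess_next_token_scores: ban a single token.
    # Unlike the real function, this returns a copy instead of mutating.
    scores = scores.clone()
    scores[:, banned_token_id] = -float("inf")
    return scores

next_token_logits = torch.tensor([[2.0, 1.0, 0.5]])
scores = postprocess(next_token_logits, banned_token_id=0)

# What the report feared: argmax over the raw logits picks the banned token.
assert torch.argmax(next_token_logits, dim=-1).item() == 0

# Taking the argmax over the post-processed scores respects the ban.
assert torch.argmax(scores, dim=-1).item() == 1
```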
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6904/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6903
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6903/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6903/comments
https://api.github.com/repos/huggingface/transformers/issues/6903/events
https://github.com/huggingface/transformers/pull/6903
690,960,795
MDExOlB1bGxSZXF1ZXN0NDc3NzY5NzE1
6,903
Output attention takes an s
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=h1) Report\n> Merging [#6903](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/485da7222f7f9ca9854db1a6df027b00d348d017?el=desc) will **increase** coverage by `0.29%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6903/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6903 +/- ##\n==========================================\n+ Coverage 79.30% 79.59% +0.29% \n==========================================\n Files 157 157 \n Lines 28853 28853 \n==========================================\n+ Hits 22882 22966 +84 \n+ Misses 5971 5887 -84 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (ø)` | |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.64% <ø> (+0.67%)` | :arrow_up: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `85.18% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.73% <ø> (ø)` | |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.84% <ø> (ø)` | |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `66.86% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <ø> (-34.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <ø> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <ø> (ø)` | |\n| ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=footer). Last update [485da72...e8fd79c](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
COLLABORATOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #6902 Stas would have come up with a nice Perl magic command, but I did a regex search (`output_attention[^s]`) to fix all those misspelled args. In the process, I noticed a few examples were missing a line, so I added that too.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6903/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6903/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6903", "html_url": "https://github.com/huggingface/transformers/pull/6903", "diff_url": "https://github.com/huggingface/transformers/pull/6903.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6903.patch", "merged_at": 1599048705000 }
https://api.github.com/repos/huggingface/transformers/issues/6902
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6902/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6902/comments
https://api.github.com/repos/huggingface/transformers/issues/6902/events
https://github.com/huggingface/transformers/issues/6902
690,940,905
MDU6SXNzdWU2OTA5NDA5MDU=
6,902
Example config code uses invalid 'output_attention' rather than 'output_attentions'
{ "login": "lannelin", "id": 26149456, "node_id": "MDQ6VXNlcjI2MTQ5NDU2", "avatar_url": "https://avatars.githubusercontent.com/u/26149456?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lannelin", "html_url": "https://github.com/lannelin", "followers_url": "https://api.github.com/users/lannelin/followers", "following_url": "https://api.github.com/users/lannelin/following{/other_user}", "gists_url": "https://api.github.com/users/lannelin/gists{/gist_id}", "starred_url": "https://api.github.com/users/lannelin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lannelin/subscriptions", "organizations_url": "https://api.github.com/users/lannelin/orgs", "repos_url": "https://api.github.com/users/lannelin/repos", "events_url": "https://api.github.com/users/lannelin/events{/privacy}", "received_events_url": "https://api.github.com/users/lannelin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for reporting! The PR mentioned above should fix all of those." ]
1,599
1,599
1,599
NONE
null
Looks like a documentation-only bug: `output_attention` is used rather than `output_attentions`. It occurs in multiple places in the repo. Maybe linked to #2985 ## Environment info - `transformers` version: 3.1.0 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.7.4 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help documentation: @sgugger ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The task I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: From docs L4-L5 of the AutoConfig example: [docs](https://huggingface.co/transformers/model_doc/auto.html#transformers.AutoConfig) or [code](https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_auto.py#L264) ```python config = AutoConfig.from_pretrained('bert-base-uncased', output_attention=True, foo=False) assert config.output_attention == True ``` causes: ``` AttributeError: 'BertConfig' object has no attribute 'output_attention' ``` ## Expected behavior The assertion given in the documentation passes.
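For reference, the corrected form of the documented snippet, assuming only that `output_attentions` (with the trailing `s`) is the valid attribute name, which is what the fix in #6903 applies across the docs:

```python
from transformers import AutoConfig

# Note the trailing "s": output_attentions, not output_attention.
config = AutoConfig.from_pretrained("bert-base-uncased", output_attentions=True)
assert config.output_attentions is True
```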
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6902/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6901
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6901/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6901/comments
https://api.github.com/repos/huggingface/transformers/issues/6901/events
https://github.com/huggingface/transformers/issues/6901
690,773,288
MDU6SXNzdWU2OTA3NzMyODg=
6,901
Relaxing `PreTrainedModel` requirement in _save
{ "login": "prajjwal1", "id": 24690051, "node_id": "MDQ6VXNlcjI0NjkwMDUx", "avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prajjwal1", "html_url": "https://github.com/prajjwal1", "followers_url": "https://api.github.com/users/prajjwal1/followers", "following_url": "https://api.github.com/users/prajjwal1/following{/other_user}", "gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}", "starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions", "organizations_url": "https://api.github.com/users/prajjwal1/orgs", "repos_url": "https://api.github.com/users/prajjwal1/repos", "events_url": "https://api.github.com/users/prajjwal1/events{/privacy}", "received_events_url": "https://api.github.com/users/prajjwal1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I don't see anything blocking with this. Wdyt @sgugger @julien-c ?", "We can give a warning but then the rest of the method will fail. Are you thinking of aborting the save entirely for models that are not `PretrainedModel`s? Also, why are you not inheriting from `PretrainedModel` in your example? Is there something limiting?\r\n\r\nNote that Trainer is not supposed to be a generic training loop, but we can surely make it a bit more flexible.", "Yes, `Trainer` is not a general loop, but it works for custom models as I've tried. Majority of its parts are generalized. `PreTrainedModel` also inherits from `nn.Module`, so users can do that, although its quite common for users to inherit from `nn.Module` directly. I'm not sure how the method will fail ? We can just add a warning instead of raising a `ValueError`. The reason why I'm saying is that users would want to do more than just what `transformers` provide out of the box (for instance justing using `AutoModel` and not `SequenceClassification` models (I'm seeing a growing interest in using such models). I think `nlp` is heading towards that direction (making everything general). This works fine for all cases, I guess:\r\n```\r\nfrom types import MethodType\r\n\r\ndef _save(self, output_dir: Optional[str] = None):\r\n output_dir = output_dir if output_dir is not None else self.args.output_dir\r\n os.makedirs(output_dir, exist_ok=True)\r\n logger.info(\"Saving model checkpoint to %s\", output_dir)\r\n\r\n torch.save(\r\n {\"model_state_dict\": self.model.state_dict()},\r\n os.path.join(output_dir, \"pytorch_model.bin\"),\r\n )\r\n\r\n # Good practice: save your training arguments together with the trained model\r\n torch.save(self.args, os.path.join(output_dir, \"training_args.bin\"))\r\n\r\ntrainer._save = MethodType(_save, trainer)\r\n```\r\nWhere do you think the approach may not work ? After providing the warning, its upto users if they further want to make changes by overriding this method (they would know that `transformers` is not responsible anymore since its not a `PreTrainedModel`. Current method completely breaks the training due to `ValueError`.\r\nThis is optional, I felt that it would be useful to have. I'll open a PR if you approve.", "`save_pretrained` does more than the method you mention, but we could refactor the code inside to work with all models probably. I don't see any place it uses specific stuff from `PretrainedModel`. The thing we don't want is to add and maintain too generic code, but if it's easy enough I see no objection.\r\n\r\nYou didn't tell me why subclassing `PreTrainedModel` did not work however ;-) That is what I would expect a user building a custom model using transformers to do .", "The `PreTrainedModel` is a generic class amongst all models in `transformers`, all classes pertaining to it comply in terms of the methods it provides and can use functionalities such as `init_weights`, `prune_heads`. They might not work for custom models. For instance, some methods require `.config.` attribute which custom models may not directly have. I guess one can define their custom model to be exactly what `PreTrainedModel` requires them to be (haven't looked into that), but that would be asking users to read through what `PreTrainedModel` expects or maybe specifying in docs. 
It's totally up to you what you expect the users to do in case they use custom models.", "After some internal discussion with @julien-c we will lower the requirement from `PreTrainedModel` to some lower abstractclass/protocol so the user knows exactly what they have to implement for their model to work seamlessly with `Trainer`. I will work on this end of this week beginning of next. ", "Sounds good. I'll look forward to that part then." ]
1,599
1,601
1,601
CONTRIBUTOR
null
# 🚀 Feature request It's great to see that `Trainer` is becoming flexible. Each function seems to be more self-contained now, making inheritance easier. I've experimented with many custom models. For instance, ``` class Model(nn.Module): def __init__(self, ..): super().__init__() self.encoder = AutoModel.from_pretrained(..) self.custom_modules = .. def forward(self, **kwargs): output = self.encoder(**kwargs) # some custom operations ``` Many users need to create custom models if they just don't want a simple `SequenceClassification` head. In all such cases, I have to override the `_save` method because of [this line](https://github.com/huggingface/transformers/blob/d822ab636b6a14ed50f7bca0797c1de42c19de61/src/transformers/trainer.py#L1097), which explicitly restricts `Trainer` to models that inherit from `PreTrainedModel`. It would be good to relax this requirement and emit a warning when the model is not a `PreTrainedModel` instead. ## Your contribution I'll open a PR if I get approval.
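A minimal sketch of the subclassing variant of the workaround shown in the comments above. It assumes a plain `state_dict` checkpoint is acceptable for the custom model; it is not the lower abstraction the maintainers say they plan to add.

```python
import os
from typing import Optional

import torch
from transformers import Trainer


class CustomModelTrainer(Trainer):
    """Trainer whose _save works for any nn.Module, not just PreTrainedModel."""

    def _save(self, output_dir: Optional[str] = None):
        output_dir = output_dir if output_dir is not None else self.args.output_dir
        os.makedirs(output_dir, exist_ok=True)
        # Save a plain state_dict instead of calling model.save_pretrained().
        torch.save(self.model.state_dict(), os.path.join(output_dir, "pytorch_model.bin"))
        # Good practice: save the training arguments alongside the weights.
        torch.save(self.args, os.path.join(output_dir, "training_args.bin"))
```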
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6901/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6900
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6900/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6900/comments
https://api.github.com/repos/huggingface/transformers/issues/6900/events
https://github.com/huggingface/transformers/issues/6900
690,768,192
MDU6SXNzdWU2OTA3NjgxOTI=
6,900
Can DistilBert.forward() support token_type_ids?
{ "login": "Yusifu", "id": 28774881, "node_id": "MDQ6VXNlcjI4Nzc0ODgx", "avatar_url": "https://avatars.githubusercontent.com/u/28774881?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yusifu", "html_url": "https://github.com/Yusifu", "followers_url": "https://api.github.com/users/Yusifu/followers", "following_url": "https://api.github.com/users/Yusifu/following{/other_user}", "gists_url": "https://api.github.com/users/Yusifu/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yusifu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yusifu/subscriptions", "organizations_url": "https://api.github.com/users/Yusifu/orgs", "repos_url": "https://api.github.com/users/Yusifu/repos", "events_url": "https://api.github.com/users/Yusifu/events{/privacy}", "received_events_url": "https://api.github.com/users/Yusifu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "DistilBERT can support sentence pair-like inputs but does not make use of token type IDs. It detects sentence pairs according to the special tokens. cc @VictorSanh ", "@Yusifu Did you find a solution for this problem? I'm also doing sentence-pair classification (NLI) with Distilbert.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,606
1,606
NONE
null
I am using DistilBert to distill a pretrained Bert model, that is, Bert -> DistilBert. The input to Bert is a sentence pair: [CLS] Hello world [SEP] Hello Python [SEP]. But DistilBert does not support sentence pair inputs. Can DistilBert support sentence pair-like inputs?
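As the first comment explains, DistilBERT handles sentence pairs through its special tokens rather than token type IDs. A minimal sketch of pair encoding:

```python
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")

# Pair encoding becomes [CLS] hello world [SEP] hello python [SEP];
# the returned encoding contains no token_type_ids, by design.
inputs = tokenizer("Hello world", "Hello Python", return_tensors="pt")
assert "token_type_ids" not in inputs

outputs = model(**inputs)
```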
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6900/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6899
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6899/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6899/comments
https://api.github.com/repos/huggingface/transformers/issues/6899/events
https://github.com/huggingface/transformers/issues/6899
690,710,065
MDU6SXNzdWU2OTA3MTAwNjU=
6,899
Can Transformers' GPT2 receive output hidden_states from an external encoder?
{ "login": "wulaoshi", "id": 27938964, "node_id": "MDQ6VXNlcjI3OTM4OTY0", "avatar_url": "https://avatars.githubusercontent.com/u/27938964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wulaoshi", "html_url": "https://github.com/wulaoshi", "followers_url": "https://api.github.com/users/wulaoshi/followers", "following_url": "https://api.github.com/users/wulaoshi/following{/other_user}", "gists_url": "https://api.github.com/users/wulaoshi/gists{/gist_id}", "starred_url": "https://api.github.com/users/wulaoshi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wulaoshi/subscriptions", "organizations_url": "https://api.github.com/users/wulaoshi/orgs", "repos_url": "https://api.github.com/users/wulaoshi/repos", "events_url": "https://api.github.com/users/wulaoshi/events{/privacy}", "received_events_url": "https://api.github.com/users/wulaoshi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @wulaoshi - I don't fully understand your question. Could you maybe post such a higher level question on the forum at `discuss.huggingface.co` ? :-) ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I want GPT2 to receive the output hidden_states from a BERT encoder and attend over them when computing attention. How can I do this? Thanks. <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
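One possible route for the question above, assuming a transformers version in which GPT-2 can act as a decoder with cross-attention inside the `EncoderDecoderModel` wrapper; the checkpoint names are only illustrative, and the newly added cross-attention weights start out randomly initialized:

```python
from transformers import BertTokenizer, EncoderDecoderModel

# Wires a BERT encoder to a GPT-2 decoder; the decoder's cross-attention
# layers consume the encoder's hidden states.
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
input_ids = tokenizer("Hello world", return_tensors="pt").input_ids

outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
logits = outputs[0]
```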
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6899/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6898
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6898/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6898/comments
https://api.github.com/repos/huggingface/transformers/issues/6898/events
https://github.com/huggingface/transformers/pull/6898
690,706,463
MDExOlB1bGxSZXF1ZXN0NDc3NTYxNDQ1
6,898
[testing] fix ambiguous test
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=h1) Report\n> Merging [#6898](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d822ab636b6a14ed50f7bca0797c1de42c19de61?el=desc) will **increase** coverage by `1.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6898/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6898 +/- ##\n==========================================\n+ Coverage 79.61% 80.62% +1.00% \n==========================================\n Files 157 157 \n Lines 28826 28826 \n==========================================\n+ Hits 22951 23241 +290 \n+ Misses 5875 5585 -290 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.10% <0.00%> (-3.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <0.00%> (-0.68%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.27%)` | :arrow_up: |\n| ... 
and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=footer). Last update [d822ab6...6b67e49](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
Since `generate()` does: ``` num_beams = num_beams if num_beams is not None else self.config.num_beams ``` This test fails if `model.config.num_beams > 1` (which is the case in the model I'm porting). This fix makes the test setup unambiguous by passing an explicit `num_beams=1` to `generate()`. Thanks.
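For context, a sketch of the ambiguity this PR removes, using an illustrative T5 checkpoint whose config is forced to default to beam search:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.config.num_beams = 4  # simulate a checkpoint that defaults to beam search

input_ids = tokenizer("translate English to German: Hello", return_tensors="pt").input_ids

# Ambiguous: with no explicit argument, generate() falls back to
# model.config.num_beams, so this silently runs beam search.
beam_output = model.generate(input_ids)

# Unambiguous: greedy decoding regardless of the config default.
greedy_output = model.generate(input_ids, num_beams=1, do_sample=False)
```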
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6898/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6898", "html_url": "https://github.com/huggingface/transformers/pull/6898", "diff_url": "https://github.com/huggingface/transformers/pull/6898.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6898.patch", "merged_at": 1599056297000 }
https://api.github.com/repos/huggingface/transformers/issues/6897
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6897/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6897/comments
https://api.github.com/repos/huggingface/transformers/issues/6897/events
https://github.com/huggingface/transformers/pull/6897
690,688,635
MDExOlB1bGxSZXF1ZXN0NDc3NTQ2OTYy
6,897
Update modeling_bert.py
{ "login": "parthe", "id": 5085600, "node_id": "MDQ6VXNlcjUwODU2MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/5085600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/parthe", "html_url": "https://github.com/parthe", "followers_url": "https://api.github.com/users/parthe/followers", "following_url": "https://api.github.com/users/parthe/following{/other_user}", "gists_url": "https://api.github.com/users/parthe/gists{/gist_id}", "starred_url": "https://api.github.com/users/parthe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/parthe/subscriptions", "organizations_url": "https://api.github.com/users/parthe/orgs", "repos_url": "https://api.github.com/users/parthe/repos", "events_url": "https://api.github.com/users/parthe/events{/privacy}", "received_events_url": "https://api.github.com/users/parthe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=h1) Report\n> Merging [#6897](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d822ab636b6a14ed50f7bca0797c1de42c19de61?el=desc) will **increase** coverage by `0.77%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6897/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6897 +/- ##\n==========================================\n+ Coverage 79.61% 80.39% +0.77% \n==========================================\n Files 157 157 \n Lines 28826 28826 \n==========================================\n+ Hits 22951 23174 +223 \n+ Misses 5875 5652 -223 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <ø> (ø)` | |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `57.29% <0.00%> (-39.79%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.85% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.96% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |\n| ... 
and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=footer). Last update [d822ab6...b6c59a1](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,599
1,599
1,599
CONTRIBUTOR
null
outptus -> outputs in example of BertForPreTraining <!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6897/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6897", "html_url": "https://github.com/huggingface/transformers/pull/6897", "diff_url": "https://github.com/huggingface/transformers/pull/6897.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6897.patch", "merged_at": 1599043142000 }
https://api.github.com/repos/huggingface/transformers/issues/6896
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6896/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6896/comments
https://api.github.com/repos/huggingface/transformers/issues/6896/events
https://github.com/huggingface/transformers/issues/6896
690,684,295
MDU6SXNzdWU2OTA2ODQyOTU=
6,896
The result of the translation task on en-zh is not good, especially on short text
{ "login": "barton-wa", "id": 57668458, "node_id": "MDQ6VXNlcjU3NjY4NDU4", "avatar_url": "https://avatars.githubusercontent.com/u/57668458?v=4", "gravatar_id": "", "url": "https://api.github.com/users/barton-wa", "html_url": "https://github.com/barton-wa", "followers_url": "https://api.github.com/users/barton-wa/followers", "following_url": "https://api.github.com/users/barton-wa/following{/other_user}", "gists_url": "https://api.github.com/users/barton-wa/gists{/gist_id}", "starred_url": "https://api.github.com/users/barton-wa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/barton-wa/subscriptions", "organizations_url": "https://api.github.com/users/barton-wa/orgs", "repos_url": "https://api.github.com/users/barton-wa/repos", "events_url": "https://api.github.com/users/barton-wa/events{/privacy}", "received_events_url": "https://api.github.com/users/barton-wa/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,604
1,604
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6896/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6895
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6895/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6895/comments
https://api.github.com/repos/huggingface/transformers/issues/6895/events
https://github.com/huggingface/transformers/pull/6895
690,676,691
MDExOlB1bGxSZXF1ZXN0NDc3NTM3NDU1
6,895
Create README.md
{ "login": "XiaoqiJiao", "id": 24711193, "node_id": "MDQ6VXNlcjI0NzExMTkz", "avatar_url": "https://avatars.githubusercontent.com/u/24711193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XiaoqiJiao", "html_url": "https://github.com/XiaoqiJiao", "followers_url": "https://api.github.com/users/XiaoqiJiao/followers", "following_url": "https://api.github.com/users/XiaoqiJiao/following{/other_user}", "gists_url": "https://api.github.com/users/XiaoqiJiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/XiaoqiJiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XiaoqiJiao/subscriptions", "organizations_url": "https://api.github.com/users/XiaoqiJiao/orgs", "repos_url": "https://api.github.com/users/XiaoqiJiao/repos", "events_url": "https://api.github.com/users/XiaoqiJiao/events{/privacy}", "received_events_url": "https://api.github.com/users/XiaoqiJiao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,599
1,599
1,599
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6895/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6895", "html_url": "https://github.com/huggingface/transformers/pull/6895", "diff_url": "https://github.com/huggingface/transformers/pull/6895.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6895.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6894
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6894/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6894/comments
https://api.github.com/repos/huggingface/transformers/issues/6894/events
https://github.com/huggingface/transformers/issues/6894
690,520,550
MDU6SXNzdWU2OTA1MjA1NTA=
6,894
Getting import error
{ "login": "tuhinjubcse", "id": 3104771, "node_id": "MDQ6VXNlcjMxMDQ3NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tuhinjubcse", "html_url": "https://github.com/tuhinjubcse", "followers_url": "https://api.github.com/users/tuhinjubcse/followers", "following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}", "gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}", "starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions", "organizations_url": "https://api.github.com/users/tuhinjubcse/orgs", "repos_url": "https://api.github.com/users/tuhinjubcse/repos", "events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}", "received_events_url": "https://api.github.com/users/tuhinjubcse/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "You can try the following changes:\r\n\r\n```python\r\nfrom transformers import BertPreTrainedModel, RobertaModel, ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST, RobertaConfig\r\n\r\nclass RobertaForMD(BertPreTrainedModel): # Metaphor Detection, modified from BertForTokenClassification\r\n config_class = RobertaConfig\r\n pretrained_model_archive_map = ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST\r\n base_model_prefix = \"roberta\"\r\n \r\n def __init__(self, config):\r\n super().__init__(config)\r\n self.num_labels = config.num_labels\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,599
1,605
1,605
NONE
null
``` from transformers import BertPreTrainedModel, RobertaModel import torch class RobertaForMD(BertPreTrainedModel): # Metaphor Detection, modified from BertForTokenClassification def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.bert = RobertaModel(config) self.dropout = torch.nn.Dropout(config.hidden_dropout_prob) self.classifier = torch.nn.Linear(config.hidden_size, self.config.num_labels) # self.loss = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor([3], dtype=torch.float32)) self.loss = torch.nn.BCEWithLogitsLoss() self.init_weights() def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, word_posi=None ): outputs = self.bert( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, ) last_hidden_state = outputs[0] last_hidden_state = self.dropout(last_hidden_state) batch_size = input_ids.shape[0] word_state = torch.empty((0, last_hidden_state.shape[2]), dtype=torch.float32).cuda() for i in range(batch_size): word_state = torch.cat((word_state, last_hidden_state[i][word_posi[i]].unsqueeze(0))) logits = self.classifier(word_state) outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here if labels is not None: loss = self.loss(logits.view(-1), labels.to(torch.float32)) outputs = (loss,) + outputs return outputs # (loss), logits, (hidden_states), (attentions) ``` I am calling this using model = RobertaForMD.from_pretrained(model_name, num_labels=1) Name: transformers Version: 2.7.0 File "main.py", line 276, in main model = RobertaForMD.from_pretrained(model_name, num_labels=1) File "/nas/home/tuhinc/miniconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 438, in from_pretrained **kwargs, File "/nas/home/tuhinc/miniconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 199, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/nas/home/tuhinc/miniconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 269, in get_config_dict raise EnvironmentError(msg) OSError: Can't load 'roberta-large'. Make sure that: - 'roberta-large' is a correct model identifier listed on 'https://huggingface.co/models' - or 'roberta-large' is the correct path to a directory containing a 'config.json' file
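The fix suggested in the comments above works because, in transformers 2.x, the shortcut name `roberta-large` only resolves through `RobertaConfig`, while `BertPreTrainedModel` defaults to `BertConfig`, which has no such shortcut and therefore raises the OSError shown in the traceback. A minimal sketch of the corrected class header (the rest of the class body stays as above; note the submodule is renamed to match `base_model_prefix`):

```python
from transformers import BertPreTrainedModel, RobertaConfig, RobertaModel

class RobertaForMD(BertPreTrainedModel):
    # Without this, from_pretrained("roberta-large") goes through BertConfig,
    # which cannot resolve the "roberta-large" shortcut name.
    config_class = RobertaConfig
    base_model_prefix = "roberta"

    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        # Named "roberta" to match base_model_prefix, so the pretrained
        # checkpoint's "roberta.*" weights load into it correctly.
        self.roberta = RobertaModel(config)
```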
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6894/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6893
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6893/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6893/comments
https://api.github.com/repos/huggingface/transformers/issues/6893/events
https://github.com/huggingface/transformers/pull/6893
690,508,745
MDExOlB1bGxSZXF1ZXN0NDc3Mzg2MTIy
6,893
Model card for huBERT
{ "login": "DavidNemeskey", "id": 690386, "node_id": "MDQ6VXNlcjY5MDM4Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/690386?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DavidNemeskey", "html_url": "https://github.com/DavidNemeskey", "followers_url": "https://api.github.com/users/DavidNemeskey/followers", "following_url": "https://api.github.com/users/DavidNemeskey/following{/other_user}", "gists_url": "https://api.github.com/users/DavidNemeskey/gists{/gist_id}", "starred_url": "https://api.github.com/users/DavidNemeskey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavidNemeskey/subscriptions", "organizations_url": "https://api.github.com/users/DavidNemeskey/orgs", "repos_url": "https://api.github.com/users/DavidNemeskey/repos", "events_url": "https://api.github.com/users/DavidNemeskey/events{/privacy}", "received_events_url": "https://api.github.com/users/DavidNemeskey/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=h1) Report\n> Merging [#6893](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d822ab636b6a14ed50f7bca0797c1de42c19de61?el=desc) will **increase** coverage by `0.46%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6893/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6893 +/- ##\n==========================================\n+ Coverage 79.61% 80.08% +0.46% \n==========================================\n Files 157 157 \n Lines 28826 28826 \n==========================================\n+ Hits 22951 23086 +135 \n+ Misses 5875 5740 -135 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.44% <0.00%> (-7.59%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| ... 
and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=footer). Last update [d822ab6...3979cda](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "👍 " ]
1,599
1,599
1,599
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6893/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6893", "html_url": "https://github.com/huggingface/transformers/pull/6893", "diff_url": "https://github.com/huggingface/transformers/pull/6893.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6893.patch", "merged_at": 1599036610000 }
https://api.github.com/repos/huggingface/transformers/issues/6892
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6892/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6892/comments
https://api.github.com/repos/huggingface/transformers/issues/6892/events
https://github.com/huggingface/transformers/issues/6892
690,499,535
MDU6SXNzdWU2OTA0OTk1MzU=
6,892
[t5] Missing requirements in examples/seq2seq
{ "login": "jeff-da", "id": 24738825, "node_id": "MDQ6VXNlcjI0NzM4ODI1", "avatar_url": "https://avatars.githubusercontent.com/u/24738825?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeff-da", "html_url": "https://github.com/jeff-da", "followers_url": "https://api.github.com/users/jeff-da/followers", "following_url": "https://api.github.com/users/jeff-da/following{/other_user}", "gists_url": "https://api.github.com/users/jeff-da/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeff-da/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeff-da/subscriptions", "organizations_url": "https://api.github.com/users/jeff-da/orgs", "repos_url": "https://api.github.com/users/jeff-da/repos", "events_url": "https://api.github.com/users/jeff-da/events{/privacy}", "received_events_url": "https://api.github.com/users/jeff-da/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Those requirements are all in [here](https://github.com/huggingface/transformers/blob/master/examples/requirements.txt). Are you sure you ran `pip install -r ./examples/requirements.txt` as mentioned in the [README of all examples](https://github.com/huggingface/transformers/tree/master/examples#important-note)?\r\n\r\nThey are not, and won't be requirements of the main library since they are only used for some specific tasks.", "Huh, probably just a local issue then. Thanks!" ]
1,599
1,599
1,599
NONE
null
Hi! This is a very small fix. But it seems like some requirements for examples/seq2seq are missing: namely rouge-score, gitpython, and sacrebleu. Is this intentional (a conflict with other example requirements)? Of course, I would be happy to open a PR.

## Environment info
- `transformers` version: 8b884dadc6bd70600c98bb35b522beb0005e3f28
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): GPU, '1.6.0+cu101'
- Tensorflow version (GPU?): n/a
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

Summarization: @sshleifer

## Information
Model I am using (Bert, XLNet ...): T5

The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The task I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) CNN Summary
* [ ] my own task or dataset: (give details below)

## To reproduce
Steps to reproduce the behavior:
1. Create a new `conda` env
2. Install requirements.txt in /examples
3. Try running finetune_t5.sh

## Expected behavior
Fine-tuning should run correctly.
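As a quick sanity check before re-running the script, an import probe can confirm whether the three packages named above actually made it into the environment. The module names used here are the packages' usual import names (rouge-score installs as `rouge_score`, gitpython as `git`), which is an assumption worth verifying against `examples/requirements.txt`:

```python
import importlib.util

# Probe the optional seq2seq example dependencies by their import names.
for module in ("rouge_score", "git", "sacrebleu"):
    if importlib.util.find_spec(module) is None:
        print(f"missing: {module} -- try `pip install -r examples/requirements.txt`")
    else:
        print(f"found: {module}")
```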
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6892/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6891
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6891/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6891/comments
https://api.github.com/repos/huggingface/transformers/issues/6891/events
https://github.com/huggingface/transformers/issues/6891
690,469,104
MDU6SXNzdWU2OTA0NjkxMDQ=
6,891
AttributeError: 'DistilBertConfig' object has no attribute 'return_dict'
{ "login": "Y4rd13", "id": 13507900, "node_id": "MDQ6VXNlcjEzNTA3OTAw", "avatar_url": "https://avatars.githubusercontent.com/u/13507900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Y4rd13", "html_url": "https://github.com/Y4rd13", "followers_url": "https://api.github.com/users/Y4rd13/followers", "following_url": "https://api.github.com/users/Y4rd13/following{/other_user}", "gists_url": "https://api.github.com/users/Y4rd13/gists{/gist_id}", "starred_url": "https://api.github.com/users/Y4rd13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Y4rd13/subscriptions", "organizations_url": "https://api.github.com/users/Y4rd13/orgs", "repos_url": "https://api.github.com/users/Y4rd13/repos", "events_url": "https://api.github.com/users/Y4rd13/events{/privacy}", "received_events_url": "https://api.github.com/users/Y4rd13/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I don't see the transformers code that creates this bug. In 3.1.0, `DistilBertConfig` definitely has a 'return_dict' `attribute`. I tried to use your code to investigate the error, but it fails on the line `flair_sent = flair.models.TextClassifier.load('en-sentiment')` for me.\r\n\r\nHappy to investigate a code sample that uses transformers and creates the bug, but this looks like a problem to report on the fair GitHub. ", "> I don't see the transformers code that creates this bug. In 3.1.0, `DistilBertConfig` definitely has a 'return_dict' `attribute`. I tried to use your code to investigate the error, but it fails on the line `flair_sent = flair.models.TextClassifier.load('en-sentiment')` for me.\r\n> \r\n> Happy to investigate a code sample that uses transformers and creates the bug, but this looks like a problem to report on the fair GitHub.\r\n\r\nI already did, just in case I wanted to report the bug in here. Thank you anyway!", "Don't hesitate to reopen if it ends up being on our side, with a small repro using only transformers ideally.", "It ended being on flair side. Here I'll attached the link for future references [/flairNLP/flair/issues/1841](https://github.com/flairNLP/flair/issues/1841)" ]
1,598
1,599
1,599
NONE
null
## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: <True>
- Using distributed or parallel set-up in script?: <False>

### Who can help
Trainer: @sgugger
nlp datasets: [different repo](https://github.com/huggingface/nlp)
Bart: @sshleifer
examples/bert-loses-patience: @JetRunner
examples/token-classification: @stefan-it

## Information
Model I am using (Bert, XLNet ...):

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

```python
def flair_lstm(text):
    sentence = flair.data.Sentence(text)
    flair_sent.predict(sentences=sentence)
    total_sent = sentence.labels
    for label in total_sent:
        value = label.value
        score = label.score
    return '1' if value == 'POSITIVE' else '-1'
```

The task I am working on is:
* I'm working with flair to get classification polarities, but the issue seems to refer to transformers.

## To reproduce
Steps to reproduce the behavior:

1. Write:

```python
import flair

flair_sent = flair.models.TextClassifier.load('en-sentiment')

def flair_lstm(text):
    sentence = flair.data.Sentence(text)
    flair_sent.predict(sentences=sentence)
    total_sent = sentence.labels
    for label in total_sent:
        value = label.value
        score = label.score
    return '1' if value == 'POSITIVE' else '-1'

df_test = "some test dataframe"
df_test['flair'] = df_test['word'].apply(lambda x: flair_lstm(x))
```

2. See the error.

## Traceback:
```python
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-26-1ee39d7138b3> in <module>()
----> 1 df_test['flair'] = df_test['word'].apply(lambda x: flair_lstm(x))

10 frames
pandas/_libs/lib.pyx in pandas._libs.lib.map_infer()

/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in use_return_dict(self)
    217         """
    218         # If torchscript is set, force `return_dict=False` to avoid jit errors
--> 219         return self.return_dict and not self.torchscript
    220
    221     @property

AttributeError: 'DistilBertConfig' object has no attribute 'return_dict'
```
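For reference, `use_return_dict` reads the `return_dict` attribute that configurations gained in v3.1.0, so a config object serialized under an older release can lack it. A minimal sketch of one way such a stale config could be refreshed, assuming that is the cause here (the comments above concluded the real fix belonged on the flair side); the round trip is shown on a fresh config so the snippet stays self-contained:

```python
from transformers import DistilBertConfig

# Rebuilding a config through to_dict()/from_dict() re-applies the
# current defaults, including `return_dict`.
stale = DistilBertConfig()  # stands in for a config unpickled from an old release
refreshed = DistilBertConfig.from_dict(stale.to_dict())
print(refreshed.return_dict)  # the attribute exists after the rebuild
```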
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6891/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6890
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6890/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6890/comments
https://api.github.com/repos/huggingface/transformers/issues/6890/events
https://github.com/huggingface/transformers/pull/6890
690,423,009
MDExOlB1bGxSZXF1ZXN0NDc3MzEzOTM4
6,890
[Docs, Examples] Fix QA example for PT
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=h1) Report\n> Merging [#6890](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3726754a6c646adcf9cb2135ab7f72dffe074473?el=desc) will **decrease** coverage by `0.49%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6890/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6890 +/- ##\n==========================================\n- Coverage 80.05% 79.56% -0.50% \n==========================================\n Files 157 157 \n Lines 28822 28822 \n==========================================\n- Hits 23074 22932 -142 \n- Misses 5748 5890 +142 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-48.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.85% <0.00%> (-7.05%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| ... 
and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=footer). Last update [3726754...eb044f1](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,599
1,599
MEMBER
null
Fixes #6738. @sgugger - the PyTorch QA example was wrong, IMO.
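For reference, the usual extractive-QA decoding pattern looks like the following. This is a generic sketch, not necessarily the exact snippet the PR touches, and the checkpoint name is just a common example:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "distilbert-base-cased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "Which library hosts the model?"
context = "The model is hosted in the transformers library by Hugging Face."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, return_dict=True)

# Take the most likely start/end token positions and decode that span.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```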
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6890/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6890", "html_url": "https://github.com/huggingface/transformers/pull/6890", "diff_url": "https://github.com/huggingface/transformers/pull/6890.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6890.patch", "merged_at": 1599033190000 }
https://api.github.com/repos/huggingface/transformers/issues/6889
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6889/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6889/comments
https://api.github.com/repos/huggingface/transformers/issues/6889/events
https://github.com/huggingface/transformers/pull/6889
690,402,558
MDExOlB1bGxSZXF1ZXN0NDc3Mjk2NTg4
6,889
minor docs grammar fixes
{ "login": "harrywang", "id": 595772, "node_id": "MDQ6VXNlcjU5NTc3Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/595772?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harrywang", "html_url": "https://github.com/harrywang", "followers_url": "https://api.github.com/users/harrywang/followers", "following_url": "https://api.github.com/users/harrywang/following{/other_user}", "gists_url": "https://api.github.com/users/harrywang/gists{/gist_id}", "starred_url": "https://api.github.com/users/harrywang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harrywang/subscriptions", "organizations_url": "https://api.github.com/users/harrywang/orgs", "repos_url": "https://api.github.com/users/harrywang/repos", "events_url": "https://api.github.com/users/harrywang/events{/privacy}", "received_events_url": "https://api.github.com/users/harrywang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=h1) Report\n> Merging [#6889](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/311992630cfd6c776bc2672d94dcd81624ad023b?el=desc) will **increase** coverage by `0.64%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6889/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6889 +/- ##\n==========================================\n+ Coverage 79.06% 79.71% +0.64% \n==========================================\n Files 157 157 \n Lines 28823 28823 \n==========================================\n+ Hits 22789 22976 +187 \n+ Misses 6034 5847 -187 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `59.43% <0.00%> (-35.85%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `87.04% <0.00%> (-5.27%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.63% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| ... 
and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=footer). Last update [3119926...e119e42](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,599
1,599
CONTRIBUTOR
null
Just some minor documentation edits.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6889/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6889", "html_url": "https://github.com/huggingface/transformers/pull/6889", "diff_url": "https://github.com/huggingface/transformers/pull/6889.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6889.patch", "merged_at": 1599043520000 }
https://api.github.com/repos/huggingface/transformers/issues/6888
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6888/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6888/comments
https://api.github.com/repos/huggingface/transformers/issues/6888/events
https://github.com/huggingface/transformers/pull/6888
690,354,420
MDExOlB1bGxSZXF1ZXN0NDc3MjU2Mjc1
6,888
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=h1) Report\n> Merging [#6888](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/311992630cfd6c776bc2672d94dcd81624ad023b?el=desc) will **decrease** coverage by `0.84%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6888/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6888 +/- ##\n==========================================\n- Coverage 79.06% 78.22% -0.85% \n==========================================\n Files 157 157 \n Lines 28823 28823 \n==========================================\n- Hits 22789 22546 -243 \n- Misses 6034 6277 +243 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <0.00%> (-54.32%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `87.04% <0.00%> (-5.27%)` | :arrow_down: |\n| ... 
and [21 more](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=footer). Last update [3119926...157d717](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks @mrm8488 , cc @dccuchile" ]
1,598
1,598
1,598
CONTRIBUTOR
null
Add language meta attribute.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6888/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6888", "html_url": "https://github.com/huggingface/transformers/pull/6888", "diff_url": "https://github.com/huggingface/transformers/pull/6888.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6888.patch", "merged_at": 1598994542000 }
https://api.github.com/repos/huggingface/transformers/issues/6887
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6887/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6887/comments
https://api.github.com/repos/huggingface/transformers/issues/6887/events
https://github.com/huggingface/transformers/pull/6887
690,353,701
MDExOlB1bGxSZXF1ZXN0NDc3MjU1Njk0
6,887
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
Add language meta attribute.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6887/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6887", "html_url": "https://github.com/huggingface/transformers/pull/6887", "diff_url": "https://github.com/huggingface/transformers/pull/6887.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6887.patch", "merged_at": 1598994550000 }
https://api.github.com/repos/huggingface/transformers/issues/6886
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6886/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6886/comments
https://api.github.com/repos/huggingface/transformers/issues/6886/events
https://github.com/huggingface/transformers/pull/6886
690,347,966
MDExOlB1bGxSZXF1ZXN0NDc3MjUxMDk5
6,886
Create README.md
{ "login": "abedkhooli", "id": 11407254, "node_id": "MDQ6VXNlcjExNDA3MjU0", "avatar_url": "https://avatars.githubusercontent.com/u/11407254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abedkhooli", "html_url": "https://github.com/abedkhooli", "followers_url": "https://api.github.com/users/abedkhooli/followers", "following_url": "https://api.github.com/users/abedkhooli/following{/other_user}", "gists_url": "https://api.github.com/users/abedkhooli/gists{/gist_id}", "starred_url": "https://api.github.com/users/abedkhooli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abedkhooli/subscriptions", "organizations_url": "https://api.github.com/users/abedkhooli/orgs", "repos_url": "https://api.github.com/users/abedkhooli/repos", "events_url": "https://api.github.com/users/abedkhooli/events{/privacy}", "received_events_url": "https://api.github.com/users/abedkhooli/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=h1) Report\n> Merging [#6886](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/311992630cfd6c776bc2672d94dcd81624ad023b?el=desc) will **increase** coverage by `1.04%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6886/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6886 +/- ##\n==========================================\n+ Coverage 79.06% 80.10% +1.04% \n==========================================\n Files 157 157 \n Lines 28823 28823 \n==========================================\n+ Hits 22789 23089 +300 \n+ Misses 6034 5734 -300 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+0.65%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.75%)` | :arrow_up: |\n| ... 
and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=footer). Last update [3119926...0e167b5](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> model card for akhooli/xlm-r-large-arabic-sent
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6886/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6886", "html_url": "https://github.com/huggingface/transformers/pull/6886", "diff_url": "https://github.com/huggingface/transformers/pull/6886.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6886.patch", "merged_at": 1598994375000 }
https://api.github.com/repos/huggingface/transformers/issues/6885
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6885/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6885/comments
https://api.github.com/repos/huggingface/transformers/issues/6885/events
https://github.com/huggingface/transformers/pull/6885
690,329,322
MDExOlB1bGxSZXF1ZXN0NDc3MjM2MDgw
6,885
Create README.md
{ "login": "abedkhooli", "id": 11407254, "node_id": "MDQ6VXNlcjExNDA3MjU0", "avatar_url": "https://avatars.githubusercontent.com/u/11407254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abedkhooli", "html_url": "https://github.com/abedkhooli", "followers_url": "https://api.github.com/users/abedkhooli/followers", "following_url": "https://api.github.com/users/abedkhooli/following{/other_user}", "gists_url": "https://api.github.com/users/abedkhooli/gists{/gist_id}", "starred_url": "https://api.github.com/users/abedkhooli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abedkhooli/subscriptions", "organizations_url": "https://api.github.com/users/abedkhooli/orgs", "repos_url": "https://api.github.com/users/abedkhooli/repos", "events_url": "https://api.github.com/users/abedkhooli/events{/privacy}", "received_events_url": "https://api.github.com/users/abedkhooli/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "@sshleifer want to update the inference API so that the correct pipeline shows up at https://huggingface.co/akhooli/mbart-large-cc25-en-ar ? (cc @mfuntowicz)", "Seems fixed?\r\nhttps://huggingface.co/akhooli/mbart-large-cc25-en-ar\r\n![image](https://user-images.githubusercontent.com/6045025/92133236-22273300-edd6-11ea-866e-d7249f38f792.png)\r\n", "> \r\n> \r\n> Seems fixed?\r\n> https://huggingface.co/akhooli/mbart-large-cc25-en-ar\r\n> ![image](https://user-images.githubusercontent.com/6045025/92133236-22273300-edd6-11ea-866e-d7249f38f792.png)\r\nSure, just after the model card was merged. Not sure if it was due to the 'translation' tag in the card or some other magic done by your team.", "Just uploaded https://huggingface.co/akhooli/mbart-large-cc25-ar-en and it seems inference type is not recognized automatically. It defaults to fill-mask (model card submitted).", "model card merged." ]
1,598
1,599
1,598
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Model card for akhooli/mbart-large-cc25-en-ar
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6885/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6885", "html_url": "https://github.com/huggingface/transformers/pull/6885", "diff_url": "https://github.com/huggingface/transformers/pull/6885.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6885.patch", "merged_at": 1598993869000 }
https://api.github.com/repos/huggingface/transformers/issues/6884
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6884/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6884/comments
https://api.github.com/repos/huggingface/transformers/issues/6884/events
https://github.com/huggingface/transformers/pull/6884
690,323,429
MDExOlB1bGxSZXF1ZXN0NDc3MjMxMjE1
6,884
[Electra] fix warning for position ids
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=h1) Report\n> Merging [#6884](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3726754a6c646adcf9cb2135ab7f72dffe074473?el=desc) will **decrease** coverage by `0.22%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6884/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6884 +/- ##\n==========================================\n- Coverage 80.05% 79.83% -0.23% \n==========================================\n Files 157 157 \n Lines 28822 28823 +1 \n==========================================\n- Hits 23074 23010 -64 \n- Misses 5748 5813 +65 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `82.18% <100.00%> (+0.05%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.32%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| ... 
and [16 more](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=footer). Last update [3726754...8f97406](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,599
1,599
MEMBER
null
<!-- This line specifies which issue to close after the pull request is merged. --> ~Fixes 6882~ (might only be partly)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6884/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6884/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6884", "html_url": "https://github.com/huggingface/transformers/pull/6884", "diff_url": "https://github.com/huggingface/transformers/pull/6884.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6884.patch", "merged_at": 1599043492000 }
https://api.github.com/repos/huggingface/transformers/issues/6883
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6883/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6883/comments
https://api.github.com/repos/huggingface/transformers/issues/6883/events
https://github.com/huggingface/transformers/pull/6883
690,317,705
MDExOlB1bGxSZXF1ZXN0NDc3MjI2NDc0
6,883
Create README.md
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
MEMBER
null
Adds model card for Longformer2Roberta
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6883/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6883", "html_url": "https://github.com/huggingface/transformers/pull/6883", "diff_url": "https://github.com/huggingface/transformers/pull/6883.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6883.patch", "merged_at": 1598981086000 }
https://api.github.com/repos/huggingface/transformers/issues/6882
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6882/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6882/comments
https://api.github.com/repos/huggingface/transformers/issues/6882/events
https://github.com/huggingface/transformers/issues/6882
690,316,481
MDU6SXNzdWU2OTAzMTY0ODE=
6,882
Bert Checkpoint Breaks 3.0.2 -> 3.1.0 due to new buffer in BertEmbeddings
{ "login": "Laksh1997", "id": 59830552, "node_id": "MDQ6VXNlcjU5ODMwNTUy", "avatar_url": "https://avatars.githubusercontent.com/u/59830552?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Laksh1997", "html_url": "https://github.com/Laksh1997", "followers_url": "https://api.github.com/users/Laksh1997/followers", "following_url": "https://api.github.com/users/Laksh1997/following{/other_user}", "gists_url": "https://api.github.com/users/Laksh1997/gists{/gist_id}", "starred_url": "https://api.github.com/users/Laksh1997/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Laksh1997/subscriptions", "organizations_url": "https://api.github.com/users/Laksh1997/orgs", "repos_url": "https://api.github.com/users/Laksh1997/repos", "events_url": "https://api.github.com/users/Laksh1997/events{/privacy}", "received_events_url": "https://api.github.com/users/Laksh1997/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I understand it makes the code slightly cleaner; in terms of speed it is most likely negligible (compared to the embedding lookup, for example).\r\n\r\nBut not sure what to do now as all the pretrained models (that used a lot of compute to pretrain) don't work anymore in the new update.", "Hey @Laksh1997 - note that this line does not break anything. You can neglect warnings about `position_ids` since those are created at instantiation. Will open a PR to fix the warning", "@patrickvonplaten seems to break it for me:\r\n\r\n```\r\n\r\n16:43:52\r\nTraceback (most recent call last):\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/bin/transformervae\", line 33, in <module>\r\n\r\n16:43:52\r\nsys.exit(load_entry_point('exs-transformervae', 'console_scripts', 'transformervae')())\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py\", line 829, in __call__\r\n\r\n16:43:52\r\nreturn self.main(*args, **kwargs)\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py\", line 782, in main\r\n\r\n16:43:52\r\nrv = self.invoke(ctx)\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py\", line 1259, in invoke\r\n\r\n16:43:52\r\nreturn _process_result(sub_ctx.command.invoke(sub_ctx))\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py\", line 1066, in invoke\r\n\r\n16:43:52\r\nreturn ctx.invoke(self.callback, **ctx.params)\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py\", line 610, in invoke\r\n\r\n16:43:52\r\nreturn callback(*args, **kwargs)\r\n\r\n16:43:52\r\nFile \"/app/transformervae/cli.py\", line 355, in train\r\n\r\n16:43:52\r\nmodel = model_cls(hparams, pretrained_model=pretrained_model_path_or_config)\r\n\r\n16:43:52\r\nFile \"/app/transformervae/models/regression.py\", line 35, in __init__\r\n\r\n16:43:52\r\npretrained_model,\r\n\r\n16:43:52\r\nFile \"/app/transformervae/models/finetuning_model.py\", line 37, in __init__\r\n\r\n16:43:52\r\nself.encoder, self.tokenizer = self.load_pretrained_encoder(pretrained_model)\r\n\r\n16:43:52\r\nFile \"/app/transformervae/models/finetuning_model.py\", line 89, in load_pretrained_encoder\r\n\r\n16:43:52\r\npl_model = AutoModel.load(pretrained_model)\r\n\r\n16:43:52\r\nFile \"/app/transformervae/models/automodel.py\", line 98, in load\r\n\r\n16:43:52\r\nreturn model_cls.load(path)\r\n\r\n16:43:52\r\nFile \"/app/transformervae/models/base.py\", line 229, in load\r\n\r\n16:43:52\r\nreturn cls.load_from_checkpoint(filepath)\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/core/saving.py\", line 169, in load_from_checkpoint\r\n\r\n16:43:52\r\nmodel = cls._load_model_state(checkpoint, *args, **kwargs)\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/core/saving.py\", line 207, in _load_model_state\r\n\r\n16:43:52\r\nmodel.load_state_dict(checkpoint['state_dict'])\r\n\r\n16:43:52\r\nFile \"/opt/conda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 1045, in load_state_dict\r\n\r\n16:43:52\r\nself.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\n\r\n16:43:52\r\nRuntimeError: Error(s) in loading state_dict for ElectraLanguageModel:\r\n\r\n16:43:52\r\nMissing key(s) in state_dict: \"generator_model.electra.embeddings.position_ids\", \"discriminator_model.electra.embeddings.position_ids\".\r\n```", "Note, `generator_model.electra` is 
`ElectraModel`, which uses `BertEmbeddings`.", "Can you send me a code snippet so that I can reproduce your error? \r\n", "It's a big library. But I can try to recreate in a Colab. One sec.", "@patrickvonplaten Colab: https://colab.research.google.com/drive/167CwTImG5T-4c9xeIVEkH9Xrracbn30h?usp=sharing\r\n\r\nLet me know if you can access?", "It also breaks to me. The attribute embedding.position_ids can't be loaded if the model artifact is trained with v3.0.2. So it will raise an KeyError", "Hey @Laksh1997, I can't access the notebook - could you make it public for everybody to see? :-) ", "@patrickvonplaten apologies. Here is the script:\r\n\r\n```python\r\n!pip install transformers==3.0.2\r\n\r\nfrom transformers import ElectraModel, ElectraConfig\r\nimport torch\r\nimport transformers\r\n\r\nprint(transformers.__version__)\r\n\r\nmodel = ElectraModel(ElectraConfig())\r\nstate_dict = model.state_dict()\r\ntorch.save(state_dict, 'checkpoint.pt')\r\n```\r\n\r\n```python\r\n!pip install transformers==3.1.0\r\n\r\nfrom transformers import ElectraModel, ElectraConfig\r\nimport torch\r\nimport transformers\r\n\r\nprint(transformers.__version__)\r\n\r\nmodel = ElectraModel(ElectraConfig())\r\nstate_dict = torch.load('checkpoint.pt')\r\nmodel.load_state_dict(state_dict)\r\n\r\n```", "I encountered the same issue. Old checkpoints (3.0.2) can not be loaded in (3.1.0) due to KeyError.", "@Barcavin @easonnie As a temporary fix, I've just reverted back to 3.0.2. @patrickvonplaten I am hoping something can be done !", "Hi, while we work on patching this issue, you can still use version v3.1.0 by using the `from_pretrained` method. Taking @Laksh1997's example, you would do:\r\n\r\n1. Save the checkpoint in `saved_model_location/pytorch_model.bin`\r\n\r\n```py\r\nfrom transformers import ElectraModel, ElectraConfig\r\nimport torch\r\nimport transformers\r\n\r\nprint(transformers.__version__)\r\n\r\nmodel = ElectraModel(ElectraConfig())\r\nstate_dict = model.state_dict()\r\ntorch.save(state_dict, 'saved_model_location/pytorch_model.bin')\r\n```\r\n\r\n2. Load it using the method `.from_pretrained`\r\n\r\n```py\r\nfrom transformers import ElectraModel, ElectraConfig\r\nimport transformers\r\n\r\nprint(transformers.__version__)\r\n\r\nmodel = ElectraModel.from_pretrained(\"saved_model_location\", config=ElectraConfig())\r\n``` ", "You can also use the `load_state_dict` method with the `strict` option set to `False`:\r\n\r\n```py\r\nmodel.load_state_dict(state_dict, strict=False)\r\n```", "The reason this additional buffer is here now is due to this [PR](https://github.com/huggingface/transformers/pull/5773#issue-449530988). \r\n\r\nIs there a reason why you would use the `load_state_dict` instead of `from_pretrained`, as `from_pretrained` exists in part to prevent such issues from happening?", "Hi @LysandreJik \r\n\r\nThanks for the proposed solution. \r\n\r\nIn my case, I am using Pytorch Lightning which has its own saving and loading infrastructure. Thus the `from_pretrained` method can't exactly be used.\r\n\r\nThe `strict` flag is a good patch for now.\r\n\r\nI think, in general, when building on top of the library, for complex projects one cannot rely on `from_pretrained`, especially if using other ecosystems.", "Using the `strict` flag can enable a number of errors to go undetected, so I would refrain from using it. 
I think the best solution is to use version 3.0.2 for already trained models until the fix comes out.", "Any update on this @LysandreJik @patrickvonplaten ?", "As the `load_state_dict` method in `strict` mode does not allow unexpected/missing keys, this is an issue that won't be resolved. Three options are available here:\r\n- Use the recommended `from_pretrained` method, which exists specifically to work around this kind of issue\r\n- Use the `load_state_dict` method with the `strict` flag set to `False`\r\n- Pin to version v3.0.2 if none of these can be applied.\r\n\r\nMinor changes in model infrastructure can unfortunately happen as we try to optimize for efficiency, which will lead to this kind of issue. We're internally working on having our models on the hub be versionable, which should solve most of these problems. It's at least a couple of months away, however.", "@LysandreJik It is unfortunate that the library will probably have to be pinned, as the first two options are unviable for reasons described in this thread, especially because pretraining large models is computationally quite expensive (100s of GPU hours)...", "You can also use the work-around explained [here](https://github.com/huggingface/transformers/issues/6882#issuecomment-685509938) if you want to convert your weights to the updated architecture.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Just wanted to add that there is another non-trivial reason why `from_pretrained` might not be useful in all cases: fine-tuning. If I fine-tune BERT's weights on a specific dataset, most likely I will have to use `load_state_dict` afterwards to use the new weights, rather than the original weights that `from_pretrained` would load.", "@LysandreJik @Laksh1997 Setting the [persistent flag](https://pytorch.org/docs/master/generated/torch.jit.ScriptModule.html#torch.jit.ScriptModule.register_buffer) to False when registering the buffer will avoid adding it to the state_dict and can address the BC issue. ", "Hello there, \r\n\r\nI encountered the same problem: I was using transformers version 4.7.0, but the checkpoint was trained with transformers 3.0.2. I just did `pip uninstall transformers` and then `pip install transformers==3.0.2` for running the training. Presumably, you can try `model.load_state_dict(state_dict, strict=False)` as well. However, I don't feel comfortable with that solution, since `position_ids` **might** be used by the model, and filling in default values when they are not present in the pre-trained checkpoint might hurt performance. So the safer way is to downgrade transformers, in my opinion. \r\n\r\nHope this helps you out!", "Can someone confirm whether the `position_ids` are used by the model, and whether not loading them correctly would affect the performance of the model in transfer learning, continued training, or inference? Thank you", "I think it's safe to use `model.load_state_dict(state_dict, strict=False)` if the only missing information is the `position_ids` buffer. This tensor is indeed used by the model, but it's just a constant tensor containing a list of integers from 0 to the maximum number of position embeddings. 
The tensor is first created in the constructor of the `BertEmbeddings` class, in this line:\r\n\r\nhttps://github.com/huggingface/transformers/blob/fcf83011dffce3f2e8aad906f07c1ec14668f877/src/transformers/models/bert/modeling_bert.py#L182\r\n\r\nAs such, it's not really part of the optimizable parameters of the model. This means that it doesn't matter if `position_ids` is not available when calling `load_state_dict`, because the line above will create it anyway in the constructor with the required values.", "> I think it's safe to use `model.load_state_dict(state_dict, strict=False)` if the only missing information is the `position_ids` buffer. This tensor is indeed used by the model, but it's just a constant tensor containing a list of integers from 0 to the maximum number of position embeddings. The tensor is first created in the constructor of the `BertEmbeddings` class, in this line:\r\n> \r\n> https://github.com/huggingface/transformers/blob/fcf83011dffce3f2e8aad906f07c1ec14668f877/src/transformers/models/bert/modeling_bert.py#L182\r\n> \r\n> As such, it's not really part of the optimizable parameters of the model. This means that it doesn't matter if `position_ids` is not available when calling `load_state_dict`, because the line above will create it anyway in the constructor with the required values.\r\n\r\nThank you very much @dfdazac for your detailed reply. " ]
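For reference, a minimal sketch of the checkpoint-upgrade workaround discussed above. The file name is illustrative, and it assumes the only keys missing from an old (v3.0.2-era) checkpoint are the `position_ids` buffers:

```python
import torch
from transformers import ElectraModel, ElectraConfig

model = ElectraModel(ElectraConfig())
# "checkpoint.pt" is illustrative: a state_dict saved with transformers v3.0.2.
state_dict = torch.load("checkpoint.pt")

# The position_ids buffer is a constant arange tensor recreated in the model
# constructor, so copying it from the fresh model lets a strict load succeed.
for key in set(model.state_dict()) - set(state_dict):
    if key.endswith("embeddings.position_ids"):
        state_dict[key] = model.state_dict()[key].clone()

model.load_state_dict(state_dict)  # strict loading now passes
```

Separately, the `persistent=False` suggestion from the thread would keep the buffer out of `state_dict` altogether, e.g. `self.register_buffer("position_ids", torch.arange(max_len).expand((1, -1)), persistent=False)`; the `persistent` keyword is available from PyTorch 1.6 onward.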
1,598
1,626
1,608
NONE
null
Hi, Thanks for the great library. I noticed this line being added (https://github.com/huggingface/transformers/blob/v3.1.0/src/transformers/modeling_bert.py#L190) in the latest update. It breaks checkpoints that were saved when this line wasn't there. ``` Missing key(s) in state_dict: "generator_model.electra.embeddings.position_ids", "discriminator_model.electra.embeddings.position_ids". ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6882/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6882/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6881
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6881/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6881/comments
https://api.github.com/repos/huggingface/transformers/issues/6881/events
https://github.com/huggingface/transformers/issues/6881
690,275,260
MDU6SXNzdWU2OTAyNzUyNjA=
6,881
'BertEmbeddings' object has no attribute 'bias' while converting tf checkpoint
{ "login": "blueberry-cake", "id": 52150530, "node_id": "MDQ6VXNlcjUyMTUwNTMw", "avatar_url": "https://avatars.githubusercontent.com/u/52150530?v=4", "gravatar_id": "", "url": "https://api.github.com/users/blueberry-cake", "html_url": "https://github.com/blueberry-cake", "followers_url": "https://api.github.com/users/blueberry-cake/followers", "following_url": "https://api.github.com/users/blueberry-cake/following{/other_user}", "gists_url": "https://api.github.com/users/blueberry-cake/gists{/gist_id}", "starred_url": "https://api.github.com/users/blueberry-cake/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blueberry-cake/subscriptions", "organizations_url": "https://api.github.com/users/blueberry-cake/orgs", "repos_url": "https://api.github.com/users/blueberry-cake/repos", "events_url": "https://api.github.com/users/blueberry-cake/events{/privacy}", "received_events_url": "https://api.github.com/users/blueberry-cake/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It was just the naming of \"layer_norm\" instead of \"LayerNorm\" I changed the script and now it works.", "@blueberry-cake which script was that naming in? ", "@blueberry-cake could you tell me the details of how you solve this problem? I have this problem, too,I do not understand the word \"It was just the naming of \"layer_norm\" instead of \"LayerNorm\" I changed the script and now it works.\" Thanks for your help in advance!", "Hi, I encountered the same problem. I spent quite a while googling online but didn't get a solution. Could you please let me know if you get the solution? @blueberry-cake @roxannemiller @ankunw ", "> Hi, I encountered the same problem. I spent quite a while googling online but didn't get a solution. Could you please let me know if you get the solution? @blueberry-cake @roxannemiller @ankunw\r\n\r\nmaybe you could use he latest transformer have a try", "No it still doesn't work. Sign :(", "So I solved this problem with other people's help. Basically, I need to change the key name in my tf1 checkpoints. Here is the code. For further details, please see: https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/migrating_checkpoints.ipynb?hl=id#scrollTo=NPQsXQveuQiC\r\n\r\n```\r\nimport re\r\ndef change_name(checkpoint_path, output_prefix):\r\n ckpt = tf.train.Checkpoint(vars={name: variable}) \r\n ckpt.restore(converted_ckpt_path)\r\n \"\"\"\r\n Args:\r\n checkpoint_path: Path to the TF1 checkpoint.\r\n output_prefix: Path prefix to the converted checkpoint.\r\n\r\n Returns:\r\n Path to the converted checkpoint.\r\n \"\"\"\r\n vars = {}\r\n reader = tf.train.load_checkpoint(checkpoint_path)\r\n dtypes = reader.get_variable_to_dtype_map()\r\n\r\n for key in dtypes.keys():\r\n new_key = key\r\n if key=='bert/embeddings/layer_normalization/beta' or key=='bert/embeddings/layer_normalization/gamma':\r\n new_key=key.replace('layer_normalization','LayerNorm')\r\n elif re.search('layer_normalization_+\\d+',key):\r\n new_key = re.sub('layer_normalization_+\\d+','LayerNorm',key)\r\n elif re.search('layer_normalization',key):\r\n new_key = re.sub('layer_normalization','LayerNorm',key)\r\n print(new_key)\r\n vars[new_key] = tf.Variable(reader.get_tensor(key))\r\n \r\n return tf1.train.Saver(var_list=vars).save(sess=None, save_path=output_prefix)", "> So I solved this problem with other people's help. Basically, I need to change the key name in my tf1 checkpoints. Here is the code. 
For further details, please see: https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/migrating_checkpoints.ipynb?hl=id#scrollTo=NPQsXQveuQiC\r\n> \r\n> ```\r\n> import re\r\n> def change_name(checkpoint_path, output_prefix):\r\n> ckpt = tf.train.Checkpoint(vars={name: variable}) \r\n> ckpt.restore(converted_ckpt_path)\r\n> \"\"\"\r\n> Args:\r\n> checkpoint_path: Path to the TF1 checkpoint.\r\n> output_prefix: Path prefix to the converted checkpoint.\r\n> \r\n> Returns:\r\n> Path to the converted checkpoint.\r\n> \"\"\"\r\n> vars = {}\r\n> reader = tf.train.load_checkpoint(checkpoint_path)\r\n> dtypes = reader.get_variable_to_dtype_map()\r\n> \r\n> for key in dtypes.keys():\r\n> new_key = key\r\n> if key=='bert/embeddings/layer_normalization/beta' or key=='bert/embeddings/layer_normalization/gamma':\r\n> new_key=key.replace('layer_normalization','LayerNorm')\r\n> elif re.search('layer_normalization_+\\d+',key):\r\n> new_key = re.sub('layer_normalization_+\\d+','LayerNorm',key)\r\n> elif re.search('layer_normalization',key):\r\n> new_key = re.sub('layer_normalization','LayerNorm',key)\r\n> print(new_key)\r\n> vars[new_key] = tf.Variable(reader.get_tensor(key))\r\n> \r\n> return tf1.train.Saver(var_list=vars).save(sess=None, save_path=output_prefix)\r\n> ```\r\n\r\nDear friend, is there a complete integration of your code in \"convert_bert_original_tf_checkpoint_to_pytorch.py\"? I don't know how to adjust it using your code." ]
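The quoted snippet includes two stray lines from the linked guide (the `tf.train.Checkpoint(...)` and `ckpt.restore(...)` calls reference undefined names) and leaves its code fence unclosed. A cleaned-up sketch of the same renaming idea, assuming a TF2 runtime and the v1-compat `Saver` (function and path names are illustrative):

```python
import re
import tensorflow as tf

def rename_layer_norm_keys(checkpoint_path, output_prefix):
    """Copy a TF1 checkpoint, renaming layer_normalization* keys to LayerNorm."""
    new_vars = {}
    reader = tf.train.load_checkpoint(checkpoint_path)
    for key in reader.get_variable_to_dtype_map():
        # Covers both "layer_normalization" and numbered "layer_normalization_12".
        new_key = re.sub(r"layer_normalization(_\d+)?", "LayerNorm", key)
        new_vars[new_key] = tf.Variable(reader.get_tensor(key))
    # The v1-compat Saver writes a checkpoint the conversion script can read.
    return tf.compat.v1.train.Saver(var_list=new_vars).save(
        sess=None, save_path=output_prefix
    )
```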
1,598
1,665
1,599
NONE
null
When trying to convert the checkpoint of a self pre-trained tensorflow BERT model (using the **[create-pretraining.py][1]** script from google) into a pytorch model using **convert_bert_original_tf_checkpoint_to_pytorch.py** I always end up with the following error: **AttributeError: 'BertEmbeddings' object has no attribute 'bias'** The init_vars names (just the first ones) look like this: ``` ['bert/embeddings/layer_normalization/beta', 'bert/embeddings/layer_normalization/beta/adam_m', 'bert/embeddings/layer_normalization/beta/adam_v', 'bert/embeddings/layer_normalization/gamma', 'bert/embeddings/layer_normalization/gamma/adam_m', 'bert/embeddings/layer_normalization/gamma/adam_v'] ``` Code that produces the error looks like this: ``` for m_name in name: if re.fullmatch(r"[A-Za-z]+_\d+", m_name): scope_names = re.split(r"_(\d+)", m_name) else: scope_names = [m_name] if scope_names[0] == "kernel" or scope_names[0] == "gamma": pointer = getattr(pointer, "weight") elif scope_names[0] == "output_bias" or scope_names[0] == "beta": print(scope_names) pointer = getattr(pointer, "bias") elif scope_names[0] == "output_weights": pointer = getattr(pointer, "weight") elif scope_names[0] == "squad": pointer = getattr(pointer, "classifier") else: try: pointer = getattr(pointer, scope_names[0]) except AttributeError: logger.info("Skipping {}".format("/".join(name))) ``` Going through all the names and getting the right attributes from the model. When it comes to the Layer Normalization in the BertEmbeddings the script produces an error. Did anyone else encouter that error before? How did you fix this? Did I mix something up with the tensorflow versions? Thanks for your help in advance! Here again the whole stacktrace: ``` Traceback (most recent call last): File "convert_bert_original_tf_checkpoint_to_pytorch.py", line 62, in <module> convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.bert_config_file, args.pytorch_dump_path) File "convert_bert_original_tf_checkpoint_to_pytorch.py", line 37, in convert_tf_checkpoint_to_pytorch load_tf_weights_in_bert(model, config, tf_checkpoint_path) File "/modeling_bert.py", line 136, in load_tf_weights_in_bert pointer = getattr(pointer, "bias") File "module.py", line 594, in __getattr__ type(self).__name__, name)) AttributeError: 'BertEmbeddings' object has no attribute 'bias' ``` Bert Config is the following: ``` Building PyTorch model from configuration: BertConfig { "attention_probs_dropout_prob": 0.1, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 512, "initializer_range": 0.02, "intermediate_size": 2048, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 8, "num_hidden_layers": 8, "pad_token_id": 0, "type_vocab_size": 2, "vocab_size": 30522 } ``` [1]: https://github.com/google-research/bert/blob/master/run_pretraining.py
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6881/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6880
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6880/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6880/comments
https://api.github.com/repos/huggingface/transformers/issues/6880/events
https://github.com/huggingface/transformers/pull/6880
690,247,906
MDExOlB1bGxSZXF1ZXN0NDc3MTY4NzU3
6,880
Fix TF Trainer for TPU
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I have added a model_init function in the PyTorch Trainer to support hp-search. Is it possible to use this instead of changing the `args`? This would make a very big difference between the PT Trainer and TF Trainer.", "OK, I will check this. Thanks.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hello did you have any success fixing this? Can I help? I'm on a tight art student collegiate budget and tpu speed would be awesome if not necessary. I've spent like 20-30 hours on fixing the tpu issue myself and no luck. Any help getting run_clm.py on a tpu so I can quickly iterate would be awesome. But generally I'd love to move mainly to tpu but I'm not sure its there yet. New to open source really want to learn as much as possible. Can I help?", "@arccoxx This PR should be closed because we have identified two different issues:\r\n\r\n1. The first one don't come from Transformers but from TensorFlow. To make it short, TPU don't handle `tf.data.Dataset.from_generator`, Google is currently working on it and we have to wait they release the fix once they have one.\r\n2. Currently you cannot train a LM from scratch with any TF model. We are currently working on this, and it will be possible in our next release.\r\n\r\nSo for your project the best solution would be to use the PyTorch version that works on TPU and you can train from scratch any LM model.", "> @arccoxx This PR should be closed because we have identified two different issues:\r\n> \r\n> 1. The first one don't come from Transformers but from TensorFlow. To make it short, TPU don't handle `tf.data.Dataset.from_generator`, Google is currently working on it and we have to wait they release the fix once they have one.\r\n> 2. Currently you cannot train a LM from scratch with any TF model. We are currently working on this, and it will be possible in our next release.\r\n> \r\n> So for your project the best solution would be to use the PyTorch version that works on TPU and you can train from scratch any LM model.\r\n\r\nI was not able to get any pytorch version to run on xla. Is there any reference notebook that could be linked? I tried finetuning in native pytorch, running (pytorch) tuner, run_language_modeling with multiple transformers library versions 2.1.0-2.9.1, and run_clm with 3.4.0 all with no luck. Ive also tried building a pytorch lightning module and no luck. As the speedup would be that helpful (provided it can handle gpt2 medium) it would be awesome to figure out reduce these compatibility issues. My hope is to then use the tpu in a more complicated model that will use this fine tuned model. Any help would be super appreciated. Thank you!" ]
1,598
1,604
1,604
CONTRIBUTOR
null
This PR tries to fix the TensorFlow trainer by updating the order of some steps: 1. Dataset preprocessing 2. Strategy creation 3. Model creation instead of 1. Strategy creation 2. Model creation 3. Dataset preprocessing. Fixes #6672
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6880/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6880", "html_url": "https://github.com/huggingface/transformers/pull/6880", "diff_url": "https://github.com/huggingface/transformers/pull/6880.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6880.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6879
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6879/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6879/comments
https://api.github.com/repos/huggingface/transformers/issues/6879/events
https://github.com/huggingface/transformers/pull/6879
690,229,296
MDExOlB1bGxSZXF1ZXN0NDc3MTUzMjgw
6,879
Add cache_dir to save features TextDataset
{ "login": "jysohn23", "id": 19496130, "node_id": "MDQ6VXNlcjE5NDk2MTMw", "avatar_url": "https://avatars.githubusercontent.com/u/19496130?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jysohn23", "html_url": "https://github.com/jysohn23", "followers_url": "https://api.github.com/users/jysohn23/followers", "following_url": "https://api.github.com/users/jysohn23/following{/other_user}", "gists_url": "https://api.github.com/users/jysohn23/gists{/gist_id}", "starred_url": "https://api.github.com/users/jysohn23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jysohn23/subscriptions", "organizations_url": "https://api.github.com/users/jysohn23/orgs", "repos_url": "https://api.github.com/users/jysohn23/repos", "events_url": "https://api.github.com/users/jysohn23/events{/privacy}", "received_events_url": "https://api.github.com/users/jysohn23/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the quick reviews @LysandreJik and @sgugger! And yep updating black version did it (I think) thanks." ]
1,598
1,598
1,598
COLLABORATOR
null
This is for the case where the dataset lives on a read-only (RO) filesystem, which is the case in tests (GKE TPU tests).
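Illustrative usage of the new argument (both paths are hypothetical):

```python
from transformers import AutoTokenizer, TextDataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="/readonly/data/train.txt",  # hypothetical read-only mount
    block_size=128,
    cache_dir="/tmp/feature_cache",        # writable location for cached features
)
```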
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6879/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6879", "html_url": "https://github.com/huggingface/transformers/pull/6879", "diff_url": "https://github.com/huggingface/transformers/pull/6879.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6879.patch", "merged_at": 1598974938000 }
https://api.github.com/repos/huggingface/transformers/issues/6878
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6878/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6878/comments
https://api.github.com/repos/huggingface/transformers/issues/6878/events
https://github.com/huggingface/transformers/pull/6878
690,181,738
MDExOlB1bGxSZXF1ZXN0NDc3MTEzNTg5
6,878
[EncoderDecoder] Add xlm-roberta to encoder decoder
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=h1) Report\n> Merging [#6878](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3726754a6c646adcf9cb2135ab7f72dffe074473?el=desc) will **decrease** coverage by `3.21%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6878/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6878 +/- ##\n==========================================\n- Coverage 80.05% 76.84% -3.22% \n==========================================\n Files 157 157 \n Lines 28822 28825 +3 \n==========================================\n- Hits 23074 22150 -924 \n- Misses 5748 6675 +927 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <ø> (-14.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <100.00%> (ø)` | |\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `16.25% <0.00%> (-63.52%)` | :arrow_down: |\n| [src/transformers/configuration\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3RyYW5zZm9feGwucHk=) | `27.27% <0.00%> (-61.82%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `6.71% <0.00%> (-59.71%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <0.00%> (-54.32%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| ... 
and [24 more](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=footer). Last update [3726754...06cc500](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "cc @laibamehnaz ", "> cc @laibamehnaz\r\n\r\nThank you :)" ]
1,598
1,598
1,598
MEMBER
null
This PR adds `XLM-Roberta` to the EncoderDecoder framework by adding a new `XLMRobertaForCausalLM` to the models. The XLM-Roberta EncoderDecoder can be used as follows: ```python from transformers import EncoderDecoderModel import torch model = EncoderDecoderModel.from_encoder_decoder_pretrained("xlm-roberta-base", "xlm-roberta-base") input_ids = torch.tensor([10 * [0]]) outputs = model(input_ids, decoder_input_ids=input_ids, labels=input_ids, return_dict=True) print("Loss", outputs.loss) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6878/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6878", "html_url": "https://github.com/huggingface/transformers/pull/6878", "diff_url": "https://github.com/huggingface/transformers/pull/6878.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6878.patch", "merged_at": 1598990200000 }
https://api.github.com/repos/huggingface/transformers/issues/6877
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6877/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6877/comments
https://api.github.com/repos/huggingface/transformers/issues/6877/events
https://github.com/huggingface/transformers/pull/6877
690,061,258
MDExOlB1bGxSZXF1ZXN0NDc3MDEzMTg3
6,877
[WIP, TF] replace keras dense by keras.layers.DenseEinsum
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=h1) Report\n> Merging [#6877](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a32d85f0d405be53117b96075eef2875d2185892?el=desc) will **increase** coverage by `0.16%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6877/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6877 +/- ##\n==========================================\n+ Coverage 80.48% 80.65% +0.16% \n==========================================\n Files 157 157 \n Lines 28794 28796 +2 \n==========================================\n+ Hits 23175 23224 +49 \n+ Misses 5619 5572 -47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `86.63% <0.00%> (-5.27%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <0.00%> (-0.68%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| ... 
and [11 more](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=footer). Last update [a32d85f...ddbccd8](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "As a comparision. When running this line on current `master`:\r\n```\r\nTF_CPP_MIN_LOG_LEVEL=3 python examples/benchmarking/run_benchmark_tf.py --models bert-base-uncased --no_memory --batch_sizes 1 --sequence_lengths 128 256 512\r\n```\r\n\r\none gets the following results: \r\n\r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n bert-base-uncased 1 128 0.006 \r\n bert-base-uncased 1 256 0.009 \r\n bert-base-uncased 1 512 0.017 \r\n--------------------------------------------------------------------------------\r\n\r\n==================== ENVIRONMENT INFORMATION ====================\r\n- transformers_version: 3.0.2\r\n- framework: TensorFlow\r\n- eager_mode: False\r\n- use_xla: False\r\n- framework_version: 2.3.0\r\n- python_version: 3.6.10\r\n- system: Linux\r\n- cpu: x86_64\r\n- architecture: 64bit\r\n- date: 2020-09-01\r\n- time: 11:23:30.836691\r\n- fp16: False\r\n- use_multiprocessing: True\r\n- only_pretrain_model: False\r\n- cpu_ram_mb: 32088\r\n- use_gpu: True\r\n- num_gpus: 1\r\n- gpu: TITAN RTX\r\n- gpu_ram_mb: 24217\r\n- gpu_power_watts: 280.0\r\n- gpu_performance_state: 2\r\n- use_tpu: False\r\n```\r\n\r\nfor a TITAN RTX GPU.\r\n\r\nWhen running the above line on this branch, one gets the following results:\r\n\r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n bert-base-uncased 1 128 0.006 \r\n bert-base-uncased 1 256 0.008 \r\n bert-base-uncased 1 512 0.016 \r\n--------------------------------------------------------------------------------\r\n\r\n==================== ENVIRONMENT INFORMATION ====================\r\n- transformers_version: 3.0.2\r\n- framework: TensorFlow\r\n- eager_mode: False\r\n- use_xla: False\r\n- framework_version: 2.3.0\r\n- python_version: 3.6.10\r\n- system: Linux\r\n- cpu: x86_64\r\n- architecture: 64bit\r\n- date: 2020-09-01\r\n- time: 11:28:12.021389\r\n- fp16: False\r\n- use_multiprocessing: True\r\n- only_pretrain_model: False\r\n- cpu_ram_mb: 32088\r\n- use_gpu: True\r\n- num_gpus: 1\r\n- gpu: TITAN RTX\r\n- gpu_ram_mb: 24217\r\n- gpu_power_watts: 280.0\r\n- gpu_performance_state: 2\r\n- use_tpu: False\r\n```\r\n\r\nSo, I cannot see a real difference here :-/ @jlei2", "I will see whether the benchmark results are better on a GPU-V100 GPU. \r\n@jlei2 - could you post the code you used to benchmark HF Bert vs. Google Bert ? 
This would help a lot for reproducibility.", "Oops, didn't see it was [WIP].", "@patrickvonplaten can you update your code with the version given by @jlei2 [here](https://github.com/jlei2/transformers/pull/2) when you have time please. Thanks a lot!" ]
1,598
1,603
1,603
MEMBER
null
Fixes #6771. This PR might speed up the TF runtime on GPU. Note that the change requires TF 2.3.0.
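As background, a sketch of the kind of substitution the title refers to: projecting and splitting attention heads in a single einsum instead of a `Dense` layer plus reshape. It uses `tf.keras.layers.experimental.EinsumDense`, available in TF 2.3; the shapes are illustrative and this is not necessarily the exact layer the PR uses:

```python
import tensorflow as tf

hidden_size, num_heads = 768, 12
head_size = hidden_size // num_heads

# Classic projection: a Dense layer whose output is later reshaped into heads.
dense_proj = tf.keras.layers.Dense(hidden_size, name="query")

# Einsum variant: projects and splits into heads in one fused einsum op.
einsum_proj = tf.keras.layers.experimental.EinsumDense(
    equation="abc,cde->abde",
    output_shape=(None, num_heads, head_size),
    bias_axes="de",
    name="query_einsum",
)

x = tf.random.uniform((1, 128, hidden_size))  # (batch, seq_len, hidden)
print(dense_proj(x).shape)   # (1, 128, 768)
print(einsum_proj(x).shape)  # (1, 128, 12, 64)
```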
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6877/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6877", "html_url": "https://github.com/huggingface/transformers/pull/6877", "diff_url": "https://github.com/huggingface/transformers/pull/6877.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6877.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6876
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6876/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6876/comments
https://api.github.com/repos/huggingface/transformers/issues/6876/events
https://github.com/huggingface/transformers/issues/6876
690,024,585
MDU6SXNzdWU2OTAwMjQ1ODU=
6,876
[TF T5] Possible Error using TF T5 with Keras
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been stale for 1 month." ]
1,598
1,618
1,618
MEMBER
null
See: https://discuss.huggingface.co/t/how-to-train-tft5forconditionalgeneration-model/888.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6876/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6875
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6875/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6875/comments
https://api.github.com/repos/huggingface/transformers/issues/6875/events
https://github.com/huggingface/transformers/pull/6875
689,987,075
MDExOlB1bGxSZXF1ZXN0NDc2OTUwOTM2
6,875
Restore PaddingStrategy.MAX_LENGTH on QAPipeline while there is no v2.
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
MEMBER
null
The QA pipeline will be slower but will work in all situations. We need to shift towards pipelines v2, with priority on the QA pipeline, to provide a workaround. Signed-off-by: Morgan Funtowicz <[email protected]>
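For context, the padding trade-off in tokenizer terms (the model name is arbitrary): `padding="max_length"` always pads to the model maximum, which is slower but matches the fixed shapes the v1 pipeline assumes, while dynamic padding pads only to the longest sample in the batch:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
questions = ["Why is the sky blue?", "Who wrote Hamlet?"]

# PaddingStrategy.MAX_LENGTH: every sample padded to the model max (512 here).
batch_max = tok(questions, padding="max_length", truncation=True)

# PaddingStrategy.LONGEST: padded only to the longest sample in the batch.
batch_longest = tok(questions, padding=True, truncation=True)

print(len(batch_max["input_ids"][0]), len(batch_longest["input_ids"][0]))
```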
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6875/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6875", "html_url": "https://github.com/huggingface/transformers/pull/6875", "diff_url": "https://github.com/huggingface/transformers/pull/6875.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6875.patch", "merged_at": 1598952936000 }
https://api.github.com/repos/huggingface/transformers/issues/6874
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6874/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6874/comments
https://api.github.com/repos/huggingface/transformers/issues/6874/events
https://github.com/huggingface/transformers/issues/6874
689,974,271
MDU6SXNzdWU2ODk5NzQyNzE=
6,874
gradient_accumulation_steps in trainer_tf
{ "login": "krislc", "id": 4952629, "node_id": "MDQ6VXNlcjQ5NTI2Mjk=", "avatar_url": "https://avatars.githubusercontent.com/u/4952629?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krislc", "html_url": "https://github.com/krislc", "followers_url": "https://api.github.com/users/krislc/followers", "following_url": "https://api.github.com/users/krislc/following{/other_user}", "gists_url": "https://api.github.com/users/krislc/gists{/gist_id}", "starred_url": "https://api.github.com/users/krislc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krislc/subscriptions", "organizations_url": "https://api.github.com/users/krislc/orgs", "repos_url": "https://api.github.com/users/krislc/repos", "events_url": "https://api.github.com/users/krislc/events{/privacy}", "received_events_url": "https://api.github.com/users/krislc/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "What costs the most is the gradient computation, storing few predictions is ok general. I can run a sequence classification training with a batch of 32 of 128 sequence length and an acummulation of 3 with a 8GB GPU.\r\n\r\nDid you encounter during your experiments a memory issue? If yes, let me know and I will look at it.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,605
1,605
NONE
null
part-1:

```python
self.total_train_batch_size = self.args.train_batch_size * self.args.gradient_accumulation_steps
ds = (
    self.train_dataset.repeat()
    .shuffle(self.num_train_examples, seed=self.args.seed)
    .batch(self.total_train_batch_size, drop_remainder=self.args.dataloader_drop_last)
    .prefetch(tf.data.experimental.AUTOTUNE)
)
```

part-2:

```python
for _ in tf.range(self.args.gradient_accumulation_steps):
    reduced_features = {
        k: ft[: self.args.train_batch_size // self.args.n_replicas] for k, ft in features.items()
    }
    reduced_labels = labels[: self.args.train_batch_size // self.args.n_replicas]
    self.training_step(reduced_features, reduced_labels)
    features = {
        k: tf.concat(
            [ft[self.args.train_batch_size // self.args.n_replicas :], reduced_features[k]],
            axis=0,
        )
        for k, ft in features.items()
    }
    labels = tf.concat(
        [labels[self.args.train_batch_size // self.args.n_replicas :], reduced_labels], axis=0
    )
```

The implementation of gradient accumulation seems unfriendly to users with small GPU memory: the dataset is batched at the full accumulated size, so `train_batch_size * gradient_accumulation_steps` examples are materialized at once before being sliced into per-replica micro-batches.
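For contrast, a common memory-friendlier pattern batches the dataset at the per-step micro-batch size and accumulates gradients across steps, so the full accumulated batch is never materialized at once. A self-contained sketch with illustrative names (plain Keras, no distribution strategy):

```python
import tensorflow as tf

def accumulated_step(model, optimizer, loss_fn, micro_batches):
    """Run one optimizer update from several micro-batches (gradient accumulation)."""
    accum = [tf.zeros_like(v) for v in model.trainable_variables]
    for features, labels in micro_batches:
        with tf.GradientTape() as tape:
            # Scale the loss so the accumulated gradient matches one big batch.
            loss = loss_fn(labels, model(features, training=True)) / len(micro_batches)
        grads = tape.gradient(loss, model.trainable_variables)
        # Guard against None gradients for variables unused in this step.
        accum = [a + (g if g is not None else tf.zeros_like(a)) for a, g in zip(accum, grads)]
    optimizer.apply_gradients(zip(accum, model.trainable_variables))
```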
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6874/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6873
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6873/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6873/comments
https://api.github.com/repos/huggingface/transformers/issues/6873/events
https://github.com/huggingface/transformers/issues/6873
689,916,105
MDU6SXNzdWU2ODk5MTYxMDU=
6,873
Memory blowup with TPU Trainer in master
{ "login": "misrasaurabh1", "id": 1271289, "node_id": "MDQ6VXNlcjEyNzEyODk=", "avatar_url": "https://avatars.githubusercontent.com/u/1271289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/misrasaurabh1", "html_url": "https://github.com/misrasaurabh1", "followers_url": "https://api.github.com/users/misrasaurabh1/followers", "following_url": "https://api.github.com/users/misrasaurabh1/following{/other_user}", "gists_url": "https://api.github.com/users/misrasaurabh1/gists{/gist_id}", "starred_url": "https://api.github.com/users/misrasaurabh1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/misrasaurabh1/subscriptions", "organizations_url": "https://api.github.com/users/misrasaurabh1/orgs", "repos_url": "https://api.github.com/users/misrasaurabh1/repos", "events_url": "https://api.github.com/users/misrasaurabh1/events{/privacy}", "received_events_url": "https://api.github.com/users/misrasaurabh1/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Indeed this seems very problematic. Let's look into it cc @sgugger ", "Some hints - The main process takes 3.5x more RAM than the other processes individually.", "Do you have a commit id that gives the first graph, so we can look into the diff?", "I think I'm having a similar issue. I'm using `n1-highmem-16 (16 vCPUs, 104 GB memory)` with `v3-8` TPU for pre-training a RoBERTa model on 24GB text data.\r\n\r\nI was able to load the dataset using `nlp` (https://github.com/huggingface/nlp/issues/532), but it eats up all the available memory during training.\r\n\r\n<img width=\"860\" alt=\"Screen Shot 2020-09-01 at 9 19 17 PM\" src=\"https://user-images.githubusercontent.com/20531705/91850804-213cb700-ec99-11ea-853a-2e8a433bfbff.png\">\r\n\r\n(master branch on Aug 25 installed with `pip install git+https://github.com/huggingface/transformers`. Not sure how to check a commit id...)\r\n ", "Same question. I was wondering are there any strategies implemented to save memory?\r\nSomething like lazyDataloader?", "@sgugger I retried a run with the commit id 86c07e634f3624cdf3f9e4e81ca53b808c4b22c6 (20 Aug) and it seems to not have this memory blowup that we see on the current master \r\n![image](https://user-images.githubusercontent.com/1271289/91885268-5a930980-ec3c-11ea-93e3-3f07d6f1af97.png)\r\n", "@shizhediao Because the default behavior of Huggingface TPU Trainer is to load features into memory 8 times into all the processes separately, it quickly eats up vast amounts of system memory.\r\nThere are two options to save memory-\r\n1. Write a lazy loading Dataset whose `__getitem__` function quickly loads features from disk when provided with the key. This could save the most memory. Even though I haven't tested this I suspect the disk random lookup and IO in the critical path of the training loop could become a bottleneck.\r\n2. Cache the features in memory only once and share them among all the processes. I did this by using an in-memory key value server Redis by dumping all the pickled features to redis server and writing the `__getitem__` function where it loads the key from the redis server when requested. I saw empirically that this made by training about 20% faster on my workload than loading all the features 8 times into memory (probably due to cache thrashing). I used unix sockets to make the lookups even faster.", "Thanks for your reply!\r\nWould you like to share your code or are there any open-sourced code I can refer to?\r\nThanks!", "Sure, this is in the `__init__` function of my Dataset function. 
As compared to Huggingface TextDataset, this particular way sped up training by 20% for me while using around 1/7 memory and generating features faster (due to less tail-latency in multiprocessing and not writing and reading features from disk)\r\n```\r\n file_paths_copy = copy.deepcopy(file_paths)\r\n file_paths_copy = sorted(file_paths_copy) #multiprocess env, we want all processes to have the files in the same order\r\n self.redis = redis.Redis(unix_socket_path=\"/tmp/redis.sock\")\r\n self.pipe = self.redis.pipeline()\r\n file_lineno_map = {}\r\n line_counter = 0\r\n for file in file_paths_copy:\r\n num_lines = count_lines(file)\r\n\r\n file_lineno_map[file] = line_counter\r\n line_counter += num_lines\r\n # This is so that lines in each file gets assigned a unique line number in a multi-process env\r\n self.num_examples = line_counter\r\n for index, file_path in enumerate(file_paths_copy): # Can be multiple files\r\n if index % xm.xrt_world_size() == xm.get_ordinal():\r\n # If this process is assigned to process the following file, so we can use 8 cpu cores to load data parallely\r\n\r\n logger.info(\"Creating features from dataset file at %s\", file_path)\r\n with open(file_path, encoding=\"utf-8\") as f:\r\n for line_num, line in enumerate(f.read().splitlines()): # Text to Text file where each file is an example and source and target is separated by a tab symbol\r\n if (len(line) > 0 and not line.isspace()):\r\n if line.find('\\t') == -1:\r\n logger.warning(\r\n f\"Encountered a line without tab separator in file {file_path} line {line_num+1}\"\r\n )\r\n continue\r\n input, output = line.split('\\t')\r\n features = self.text_pair_to_features(input, output)\r\n\r\n key = line_num + file_lineno_map[\r\n file_path] if not self.val else \"val-\" + str(\r\n line_num + file_lineno_map[file_path]) # The name of the redis key\r\n\r\n self.pipe.set(key, pickle.dumps(features))\r\n if line_num % self.num_operations_pipelined == 1:\r\n self.pipe.execute() # So that we only dump to redis as a batch, can speed up writing\r\n self.pipe.execute()\r\n if is_torch_tpu_available():\r\n xm.rendezvous(tag=\"featuresGenerated\") # So that the multi-process environment all wait for each other before doing anything else\r\n```\r\nWith the `__getitem__` function being\r\n```\r\n def __getitem__(self, i) -> Dict[str, torch.Tensor]:\r\n if self.val:\r\n key = f\"val-{i}\"\r\n else:\r\n key = i\r\n example = pickle.loads(self.redis.get(key))\r\n return {\"input_ids\": example[0], \"attention_masks\": example[1], \"labels\": example[2]}\r\n```", "Thanks so much!", "Cool dataset!\r\n\r\n`Seq2SeqDataset` is also lazy, but no redis. I wonder the speed difference: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py#L159\r\n\r\n@patil-suraj is this going to be an issue for `Seq2SeqTrainer`? We can't read all examples into memory for MT.", "@sshleifer Not sure. I have yet to experiment with `Seq2SeqTrainer` on TPU so can't say much. 
But I have managed to successfully train t5-base on TPU using `Trainer` with a lazy dataset.", "@sshleifer @patil-suraj I studied the linecache way of doing things, and the reasons for not going with linecache for me were:\r\n- Our data files are on mounted network disks, so first-byte access time would be too large.\r\n- Data is sharded across multiple files, making linecache less effective than with just one file.\r\n- I also doubt how much linecache would help, because we are not reading lines sequentially (where caching would have helped) but rather random lines, so reading a whole block of text from disk would still mean that on average we use only one line from the block.\r\n- I am also generally wary of involving disks in the critical path of the training loop, as disks are very slow, given that a TPU requires a high input feed rate and there is evidence that the Huggingface Trainer only uses a single CPU worker rather than many, which could have helped the CPU generate features from disk in parallel while the TPU was working. See https://github.com/huggingface/transformers/issues/6316. I believe that if multiple workers were allowed in the DataLoader, then loading features from disk would be a valid solution.", "@misrasaurabh1 We just merged a simple fix that was obviously leaking memory for training (non-detached tensors) and that came from a recent change, so it might very well be the source of your leaks. Could you confirm whether or not current master still has the leak? If so, using the same fix in the evaluation loop should also fix the eval memory leak we currently have.", "Yes, with the latest master the memory leak during training is not there anymore! Memory usage seems to be constant during training.\r\n\r\n![image](https://user-images.githubusercontent.com/1271289/92520308-40bf6c80-f1d0-11ea-9ef0-0edabd646527.png)\r\n\r\nIf the same `.detach()` fix also resolved the evaluation memory leak, that would be huge! I could go down from the 32-CPU 208GB machine I am using right now to something like a 16-CPU 64GB machine, resulting in big monetary savings over time.", "Will look at the evaluation leak a bit more. From a first read, it looks like everything is properly detached, so it seems like this leak has another cause.\r\n\r\nThanks a lot for checking!", "\r\n\r\n> @shizhediao Because the default behavior of Huggingface TPU Trainer is to load features into memory 8 times into all the processes separately, it quickly eats up vast amounts of system memory.\r\n> There are two options to save memory-\r\n> \r\n> 1. Write a lazy loading Dataset whose `__getitem__` function quickly loads features from disk when provided with the key. This could save the most memory. Even though I haven't tested this I suspect the disk random lookup and IO in the critical path of the training loop could become a bottleneck.\r\n> 2. Cache the features in memory only once and share them among all the processes. I did this by using an in-memory key value server Redis by dumping all the pickled features to redis server and writing the `__getitem__` function where it loads the key from the redis server when requested. I saw empirically that this made by training about 20% faster on my workload than loading all the features 8 times into memory (probably due to cache thrashing). I used unix sockets to make the lookups even faster.\r\n\r\nRecently I had the same issue, and the same behavior occurs on GPU as well. One good solution is to use a memory-mapped dataset, which is similar in spirit to Option 1 here. 
I used the awesome [huggingface/datasets](https://github.com/huggingface/datasets) library, which provides a memory-mapped dataset class automatically through Apache Arrow, and it is fairly easy to use. I reduced my RAM usage from 90G to 6G, and it won't grow with the dataset size.", "Is there any update on this? Is the memory leak during evaluation fixed?", "@sgugger Is the memory leak during evaluation fixed by https://github.com/huggingface/transformers/pull/7767 ?", "I don't know, as I have not had time to investigate the leak during evaluation on TPUs yet.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
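A minimal sketch of the memory-mapped approach described in the last comments, using the huggingface/datasets library; the file name, column handling, and tokenizer choice below are illustrative assumptions, not taken from the original thread:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")

# Arrow-backed dataset: rows are memory-mapped from disk, so RAM usage stays
# roughly flat regardless of dataset size, and all TPU/GPU processes can share
# the same on-disk cache instead of holding 8 in-memory copies.
raw = load_dataset("text", data_files={"train": "train.tsv"})["train"]  # hypothetical file

def to_features(example):
    source, target = example["text"].split("\t")
    features = tokenizer(source, truncation=True, max_length=512)
    features["labels"] = tokenizer(target, truncation=True, max_length=512)["input_ids"]
    return features

# map() writes its output to a new Arrow file, which is again memory-mapped.
dataset = raw.map(to_features, remove_columns=["text"])
dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "labels"])
print(dataset[0]["input_ids"][:10])  # materialized lazily from disk on access
```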
1,598
1,635
1,608
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 (master) - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0a0+8fb7c50 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: Yes, TPU v2-8 ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> @sgugger @sshleifer @patrickvonplaten ## Information Recent changes to the Trainer for TPU have resulted in a memory blowup during training. On a machine with 208GB of RAM, this was the memory profile with the master branch on 20th August. ![image](https://user-images.githubusercontent.com/1271289/91822376-90a89d00-ebec-11ea-9c95-948ea93d7e41.png) This only shows an increase in memory during evaluation (which is another memory leak bug, https://github.com/huggingface/transformers/issues/5509). If you throw enough RAM at the problem, it stays under control. After the recent changes the memory profile has become this. ![image](https://user-images.githubusercontent.com/1271289/91826302-390d3000-ebf2-11ea-8dce-cd70b49f6ba7.png) Look how quickly the memory blows up, even on this huge machine. I have implemented some optimizations to save memory, caching only a single copy of the features on redis-server, but that is not enough now. The most interesting thing is that the memory now also increases during training, not just during evaluation. After these changes, the Trainer for TPUs has become unusable for training any practical model, and I request that you please look into fixing this. Model I am using (Bert, XLNet ...): T5 The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The task I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Use the TPU example run_language_modeling to reproduce. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Memory stays constant with the number of training and evaluation iterations. <!-- A clear and concise description of what you would expect to happen. -->
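To make memory profiles like the ones above easy to reproduce, here is a small, self-contained logging helper; the use of psutil and the chosen units are assumptions for illustration, not part of the original report:

```python
import os
import time

import psutil

def log_rss(tag: str) -> None:
    # Resident set size (host RAM) of the current process, in GiB.
    rss_gib = psutil.Process(os.getpid()).memory_info().rss / 2**30
    print(f"[{time.strftime('%H:%M:%S')}] {tag}: {rss_gib:.2f} GiB")

# Example: call once per logging step inside the training/eval loop and plot
# the values over time to see whether memory stays constant.
log_rss("after optimizer step")
```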
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6873/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6873/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6872
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6872/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6872/comments
https://api.github.com/repos/huggingface/transformers/issues/6872/events
https://github.com/huggingface/transformers/issues/6872
689,908,026
MDU6SXNzdWU2ODk5MDgwMjY=
6,872
transformer multitasking
{ "login": "max-yue", "id": 13486398, "node_id": "MDQ6VXNlcjEzNDg2Mzk4", "avatar_url": "https://avatars.githubusercontent.com/u/13486398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/max-yue", "html_url": "https://github.com/max-yue", "followers_url": "https://api.github.com/users/max-yue/followers", "following_url": "https://api.github.com/users/max-yue/following{/other_user}", "gists_url": "https://api.github.com/users/max-yue/gists{/gist_id}", "starred_url": "https://api.github.com/users/max-yue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/max-yue/subscriptions", "organizations_url": "https://api.github.com/users/max-yue/orgs", "repos_url": "https://api.github.com/users/max-yue/repos", "events_url": "https://api.github.com/users/max-yue/events{/privacy}", "received_events_url": "https://api.github.com/users/max-yue/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
CONTRIBUTOR
null
Doing multitasking using the transformers library, such as in https://github.com/JayYip/bert-multitask-learning: given the task specification cws|NER|weibo_ner&weibo_cws, one problem is sampled at each turn, say weibo_ner&weibo_cws, and then weibo_ner and weibo_cws are trained together for that turn.
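A rough sketch of the turn-based sampling just described; the task names, the dummy batches, and the size-proportional weighting are assumptions for illustration:

```python
import random
from itertools import cycle

# Dummy per-task batch streams standing in for real DataLoaders (hypothetical).
loaders = {
    "weibo_ner": cycle([{"task": "weibo_ner", "input_ids": [1, 2, 3]}]),
    "weibo_cws": cycle([{"task": "weibo_cws", "input_ids": [4, 5, 6]}]),
}
sizes = {"weibo_ner": 10_000, "weibo_cws": 8_000}

for turn in range(3):
    # Sample one problem per turn, proportionally to dataset size,
    # then draw the next batch from that problem's loader.
    task = random.choices(list(sizes), weights=list(sizes.values()))[0]
    batch = next(loaders[task])
    print(turn, task, batch["input_ids"])
```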
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6872/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6871
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6871/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6871/comments
https://api.github.com/repos/huggingface/transformers/issues/6871/events
https://github.com/huggingface/transformers/issues/6871
689,882,423
MDU6SXNzdWU2ODk4ODI0MjM=
6,871
Albert loads model on both CPU and GPU at the same time
{ "login": "vivekatwal", "id": 10752998, "node_id": "MDQ6VXNlcjEwNzUyOTk4", "avatar_url": "https://avatars.githubusercontent.com/u/10752998?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vivekatwal", "html_url": "https://github.com/vivekatwal", "followers_url": "https://api.github.com/users/vivekatwal/followers", "following_url": "https://api.github.com/users/vivekatwal/following{/other_user}", "gists_url": "https://api.github.com/users/vivekatwal/gists{/gist_id}", "starred_url": "https://api.github.com/users/vivekatwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vivekatwal/subscriptions", "organizations_url": "https://api.github.com/users/vivekatwal/orgs", "repos_url": "https://api.github.com/users/vivekatwal/repos", "events_url": "https://api.github.com/users/vivekatwal/events{/privacy}", "received_events_url": "https://api.github.com/users/vivekatwal/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO), where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details MODEL_DEVICE = torch.device('cuda') MODEL_PATH = './models' tokenizer = AlbertTokenizer.from_pretrained(MODEL_PATH) qa_model = AlbertForQuestionAnswering.from_pretrained(MODEL_PATH).to(MODEL_DEVICE) I am using the above code to load the model. The model occupies both RAM (1.5 GB) and GPU memory (650 MB). I have specified the torch device as cuda, but it still doesn't behave as expected. When "cpu" is specified it works well and doesn't load into the GPU, but when "cuda" is specified the model loads into both CPU and GPU memory. I tried "cuda" and "cuda:0". Any solution to this?
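For what it's worth, `from_pretrained` builds the model on CPU first and `.to(device)` then copies it, so some host RAM usage is expected even when the target device is cuda. A small sketch to verify where the weights actually live, assuming a CUDA machine and the public albert-base-v2 checkpoint:

```python
import gc

import torch
from transformers import AlbertForQuestionAnswering

model = AlbertForQuestionAnswering.from_pretrained("albert-base-v2").to("cuda")

# Every parameter should now report the CUDA device.
print({p.device for p in model.parameters()})
print(f"GPU allocated: {torch.cuda.memory_allocated() / 2**20:.0f} MiB")

# from_pretrained materializes the weights on CPU first; after .to("cuda")
# the CPU copies are garbage, but the Python/torch allocators may keep some
# host memory around. Collecting can release the Python-side references.
gc.collect()
```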
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6871/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6870
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6870/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6870/comments
https://api.github.com/repos/huggingface/transformers/issues/6870/events
https://github.com/huggingface/transformers/pull/6870
689,868,947
MDExOlB1bGxSZXF1ZXN0NDc2ODU2MzYx
6,870
Update tokenization_auto.py
{ "login": "hjptriplebee", "id": 22477665, "node_id": "MDQ6VXNlcjIyNDc3NjY1", "avatar_url": "https://avatars.githubusercontent.com/u/22477665?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hjptriplebee", "html_url": "https://github.com/hjptriplebee", "followers_url": "https://api.github.com/users/hjptriplebee/followers", "following_url": "https://api.github.com/users/hjptriplebee/following{/other_user}", "gists_url": "https://api.github.com/users/hjptriplebee/gists{/gist_id}", "starred_url": "https://api.github.com/users/hjptriplebee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hjptriplebee/subscriptions", "organizations_url": "https://api.github.com/users/hjptriplebee/orgs", "repos_url": "https://api.github.com/users/hjptriplebee/repos", "events_url": "https://api.github.com/users/hjptriplebee/events{/privacy}", "received_events_url": "https://api.github.com/users/hjptriplebee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, we have this [test](https://github.com/huggingface/transformers/blob/master/tests/test_configuration_auto.py#L45) to prevent exactly this. In what situation did you face an issue?", "@LysandreJik There is no problem for pretrained models of huggingface transformers, because the config class of them are inherited from \"PretrainedConfig\". However, for users who want to add new models, their self-defined config class may be inherited from config class of some existing pretrained models. For example, I am trying to add a new model based on \"BART\" and my NewBartConig is inherited from \"BartConig\". My new tokenizer will not be used because a “NewBartConig” object is an instance of \"BartConig\" and bart tokenizer will be used incorrectly.", "Yes, but we have similar issues with models, for example the `RobertaModel` inherits from `BertModel`. The test I mentioned above checks that (the example here is for configurations but we have the same test for models and tokenizers).\r\n\r\nCurrently the way to make sure your tokenizer is used and not the one on which it's depending is to put your tokenizer above the one it's inheriting from in the mapping. The for loop will then see this one first and use this one instead of the next one.", "@LysandreJik Changing the order of items in TOKENIZER_MAPPING can solve the problem indeed. But get the rid of mapping order is more user-friendly, right? Close the pull request if you don't think the pr is necessary. Thanks for the review" ]
1,598
1,600
1,600
CONTRIBUTOR
null
Update tokenization_auto.py to handle inherited config classes. If ConfigB inherits from ConfigA, then isinstance(ConfigB(), ConfigA) is true. We hope to use TokenizerB, but TokenizerA will be used incorrectly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6870/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6870", "html_url": "https://github.com/huggingface/transformers/pull/6870", "diff_url": "https://github.com/huggingface/transformers/pull/6870.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6870.patch", "merged_at": 1600944731000 }
https://api.github.com/repos/huggingface/transformers/issues/6869
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6869/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6869/comments
https://api.github.com/repos/huggingface/transformers/issues/6869/events
https://github.com/huggingface/transformers/issues/6869
689,849,832
MDU6SXNzdWU2ODk4NDk4MzI=
6,869
How does relative distance is computed for cross-attention in T5 model?
{ "login": "wasiahmad", "id": 17520413, "node_id": "MDQ6VXNlcjE3NTIwNDEz", "avatar_url": "https://avatars.githubusercontent.com/u/17520413?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wasiahmad", "html_url": "https://github.com/wasiahmad", "followers_url": "https://api.github.com/users/wasiahmad/followers", "following_url": "https://api.github.com/users/wasiahmad/following{/other_user}", "gists_url": "https://api.github.com/users/wasiahmad/gists{/gist_id}", "starred_url": "https://api.github.com/users/wasiahmad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wasiahmad/subscriptions", "organizations_url": "https://api.github.com/users/wasiahmad/orgs", "repos_url": "https://api.github.com/users/wasiahmad/repos", "events_url": "https://api.github.com/users/wasiahmad/events{/privacy}", "received_events_url": "https://api.github.com/users/wasiahmad/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
Let's assume we have a source sequence of length 7 and a target sequence of length 5. In the cross-attention sublayer at each decoder layer, every token in the target sequence attends to every token in the input sequence. In the T5 model, we compute the relative distance used for the position bias from the query length and key length, as in https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_t5.py#L289. My question is how the distance between two tokens is computed when one belongs to the source sequence and the other to the target sequence. The relative distance matrix (5 x 7) would look like: ``` tensor([[ 0, 1, 2, 3, 4, 5, 6], [-1, 0, 1, 2, 3, 4, 5], [-2, -1, 0, 1, 2, 3, 4], [-3, -2, -1, 0, 1, 2, 3], [-4, -3, -2, -1, 0, 1, 2]]) ``` Once we put the distances into buckets for the cross-attention, it would look like: ``` tensor([[0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0], [2, 1, 0, 0, 0, 0, 0], [3, 2, 1, 0, 0, 0, 0], [4, 3, 2, 1, 0, 0, 0]]) ``` Given that cross-attention is part of the decoder, the `bidirectional` flag is set to False. So, it means that while decoding at step `i`, the decoder will treat all the source tokens at positions `i, i+1, i+2, ...` as having a distance of `0` from the target token at position `i`. Is this correct?
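A small sketch that reproduces the two matrices above and the `bidirectional=False` clamping; this is a simplified paraphrase of the bucketing in modeling_t5.py (it ignores the log-spaced buckets used for large distances):

```python
import torch

qlen, klen = 5, 7
context_position = torch.arange(qlen)[:, None]  # decoder (query) positions
memory_position = torch.arange(klen)[None, :]   # encoder (key) positions
relative_position = memory_position - context_position  # the 5 x 7 matrix above

# bidirectional=False (decoder): positive relative positions, i.e. keys
# "ahead" of the query position, are clamped to bucket 0, matching the
# second matrix; small negative distances map 1:1 to buckets.
buckets = torch.clamp(-relative_position, min=0)
print(relative_position)
print(buckets)
```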
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6869/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6868
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6868/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6868/comments
https://api.github.com/repos/huggingface/transformers/issues/6868/events
https://github.com/huggingface/transformers/issues/6868
689,847,092
MDU6SXNzdWU2ODk4NDcwOTI=
6,868
MarianMTModel.generate error: Segmentation fault (core dumped)
{ "login": "sdhzlxm", "id": 7666659, "node_id": "MDQ6VXNlcjc2NjY2NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/7666659?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sdhzlxm", "html_url": "https://github.com/sdhzlxm", "followers_url": "https://api.github.com/users/sdhzlxm/followers", "following_url": "https://api.github.com/users/sdhzlxm/following{/other_user}", "gists_url": "https://api.github.com/users/sdhzlxm/gists{/gist_id}", "starred_url": "https://api.github.com/users/sdhzlxm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sdhzlxm/subscriptions", "organizations_url": "https://api.github.com/users/sdhzlxm/orgs", "repos_url": "https://api.github.com/users/sdhzlxm/repos", "events_url": "https://api.github.com/users/sdhzlxm/events{/privacy}", "received_events_url": "https://api.github.com/users/sdhzlxm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I run the same demo program on another server. The program can work properly. " ]
1,598
1,598
1,598
NONE
null
When I use MarianMTModel to generate sequences, that is, when running "translated = model.generate(**tokenizer.prepare_seq2seq_batch(src_text))", the Python interpreter quits automatically with the message "Segmentation fault (core dumped)". When I run the program in Jupyter, it also quits, leaving a message like this: "Kernel restarting: The kernel appears to have died. It will restart automatically."
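A minimal standalone repro sketch for this report; the checkpoint name is an assumed example, and `return_tensors="pt"` is passed explicitly to be safe. Since the reporter saw the same code work on another server, comparing PyTorch and sentencepiece builds between the two machines is a reasonable first step:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # assumed checkpoint for illustration
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Hello, world!"]
batch = tokenizer.prepare_seq2seq_batch(src_text, return_tensors="pt")
translated = model.generate(**batch)  # the reported segfault happens here
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```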
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6868/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6867
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6867/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6867/comments
https://api.github.com/repos/huggingface/transformers/issues/6867/events
https://github.com/huggingface/transformers/pull/6867
689,845,312
MDExOlB1bGxSZXF1ZXN0NDc2ODM3MDA2
6,867
[doc] typos
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=h1) Report\n> Merging [#6867](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59a6a32a61a87f9a1cccb57c3b4df725384d34ae?el=desc) will **decrease** coverage by `1.75%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6867/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6867 +/- ##\n==========================================\n- Coverage 79.91% 78.16% -1.76% \n==========================================\n Files 157 157 \n Lines 28795 28795 \n==========================================\n- Hits 23012 22508 -504 \n- Misses 5783 6287 +504 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.54% <0.00%> (-41.13%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `59.57% <0.00%> (-19.15%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| ... 
and [16 more](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=footer). Last update [59a6a32...e59e26a](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,599
1,599
CONTRIBUTOR
null
fixed typos
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6867/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6867", "html_url": "https://github.com/huggingface/transformers/pull/6867", "diff_url": "https://github.com/huggingface/transformers/pull/6867.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6867.patch", "merged_at": 1599043912000 }
https://api.github.com/repos/huggingface/transformers/issues/6866
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6866/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6866/comments
https://api.github.com/repos/huggingface/transformers/issues/6866/events
https://github.com/huggingface/transformers/pull/6866
689,827,967
MDExOlB1bGxSZXF1ZXN0NDc2ODIzMjIz
6,866
test_tf_common: remove unused mixin class parameters
{ "login": "PuneethaPai", "id": 21996583, "node_id": "MDQ6VXNlcjIxOTk2NTgz", "avatar_url": "https://avatars.githubusercontent.com/u/21996583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PuneethaPai", "html_url": "https://github.com/PuneethaPai", "followers_url": "https://api.github.com/users/PuneethaPai/followers", "following_url": "https://api.github.com/users/PuneethaPai/following{/other_user}", "gists_url": "https://api.github.com/users/PuneethaPai/gists{/gist_id}", "starred_url": "https://api.github.com/users/PuneethaPai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PuneethaPai/subscriptions", "organizations_url": "https://api.github.com/users/PuneethaPai/orgs", "repos_url": "https://api.github.com/users/PuneethaPai/repos", "events_url": "https://api.github.com/users/PuneethaPai/events{/privacy}", "received_events_url": "https://api.github.com/users/PuneethaPai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=h1) Report\n> Merging [#6866](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59a6a32a61a87f9a1cccb57c3b4df725384d34ae?el=desc) will **decrease** coverage by `0.24%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6866/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6866 +/- ##\n==========================================\n- Coverage 79.91% 79.67% -0.25% \n==========================================\n Files 157 157 \n Lines 28795 28795 \n==========================================\n- Hits 23012 22942 -70 \n- Misses 5783 5853 +70 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `95.00% <0.00%> (+13.33%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=footer). Last update [59a6a32...b4f1cad](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "The next step is to look for other places those class attributes are defined and remove them:\r\n\r\n\r\n```bash\r\n$ git grep test_pruning | grep tf\r\ntests/test_modeling_tf_distilbert.py: test_pruning = True\r\ntests/test_modeling_tf_longformer.py: test_pruning = False # pruning is not supported\r\ntests/test_modeling_tf_transfo_xl.py: test_pruning = False\r\ntests/test_modeling_tf_xlnet.py: test_pruning = False\r\n```\r\n```bash\r\n$ git grep test_torchscript | grep tf\r\ntests/test_modeling_tf_common.py: test_torchscript = True\r\ntests/test_modeling_tf_distilbert.py: test_torchscript = True\r\ntests/test_modeling_tf_longformer.py: test_torchscript = False\r\ntests/test_modeling_tf_transfo_xl.py: test_torchscript = False\r\n```\r\n", "Thanks for the support.\r\nI also found `test_head_masking` which was unused. So deleted it too. Let me know if you didn't want that to happen.\r\n\r\n```bash\r\n$ git grep -e \"test_head\" | grep tf\r\ntests/test_modeling_tf_distilbert.py: test_head_masking = True\r\ntests/test_modeling_tf_longformer.py: test_headmasking = False # head masking is not supported\r\n```\r\n\r\nThanks\r\n\r\nPS: suggestion for any other issue, which I can pick up would be great. I am looking under label `help wanted`, etc" ]
1,598
1,599
1,599
CONTRIBUTOR
null
Fixes #6590 @sshleifer Is that all for this issue? Are there any other cleanups left that I did not understand ❗ Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6866/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6866", "html_url": "https://github.com/huggingface/transformers/pull/6866", "diff_url": "https://github.com/huggingface/transformers/pull/6866.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6866.patch", "merged_at": 1599058480000 }
https://api.github.com/repos/huggingface/transformers/issues/6865
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6865/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6865/comments
https://api.github.com/repos/huggingface/transformers/issues/6865/events
https://github.com/huggingface/transformers/issues/6865
689,809,469
MDU6SXNzdWU2ODk4MDk0Njk=
6,865
Is it possible to finetune reformer model for summarization task?
{ "login": "banunitte", "id": 6847024, "node_id": "MDQ6VXNlcjY4NDcwMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/6847024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/banunitte", "html_url": "https://github.com/banunitte", "followers_url": "https://api.github.com/users/banunitte/followers", "following_url": "https://api.github.com/users/banunitte/following{/other_user}", "gists_url": "https://api.github.com/users/banunitte/gists{/gist_id}", "starred_url": "https://api.github.com/users/banunitte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/banunitte/subscriptions", "organizations_url": "https://api.github.com/users/banunitte/orgs", "repos_url": "https://api.github.com/users/banunitte/repos", "events_url": "https://api.github.com/users/banunitte/events{/privacy}", "received_events_url": "https://api.github.com/users/banunitte/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "There are no pre-trained reformer weights yet -> so that's a no sadly", "Following this issue for updates.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,607
1,607
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO), where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6865/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6864
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6864/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6864/comments
https://api.github.com/repos/huggingface/transformers/issues/6864/events
https://github.com/huggingface/transformers/issues/6864
689,800,005
MDU6SXNzdWU2ODk4MDAwMDU=
6,864
How to save the whole model as SavedModel format for inference?
{ "login": "HX-idiot", "id": 38073340, "node_id": "MDQ6VXNlcjM4MDczMzQw", "avatar_url": "https://avatars.githubusercontent.com/u/38073340?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HX-idiot", "html_url": "https://github.com/HX-idiot", "followers_url": "https://api.github.com/users/HX-idiot/followers", "following_url": "https://api.github.com/users/HX-idiot/following{/other_user}", "gists_url": "https://api.github.com/users/HX-idiot/gists{/gist_id}", "starred_url": "https://api.github.com/users/HX-idiot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HX-idiot/subscriptions", "organizations_url": "https://api.github.com/users/HX-idiot/orgs", "repos_url": "https://api.github.com/users/HX-idiot/repos", "events_url": "https://api.github.com/users/HX-idiot/events{/privacy}", "received_events_url": "https://api.github.com/users/HX-idiot/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "The model architecture is simple:\r\n![image](https://user-images.githubusercontent.com/38073340/91795291-db0f3580-ec4f-11ea-814f-2342ecfd1b1a.png)\r\n", "Sorry for the late reply. This is because you did not respect the signature of `TFBertMainLayer` in order to properly use it you can do:\r\n\r\n```python\r\nimport tensorflow as tf\r\nfrom transformers import TFBertForSequenceClassification\r\n\r\na = tf.constant([[1,2,3,4,5]])\r\nb = tf.constant([[1,1,1,1,1]])\r\ninp = {\"input_ids\": a, \"attention_mask\": b}\r\nmodel = TFBertForSequenceClassification.from_pretrained(\"bert-base-cased\")\r\nmodel._saved_model_inputs_spec = None\r\nmodel._set_save_spec(inp)\r\ntf.saved_model.save(model, \"/tmp\")\r\nmodel = tf.keras.models.load_model(\"/tmp\")\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,605
1,605
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO), where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details Hi, I want to save the whole model in the SavedModel format using model.save(), but when I load it, the input format is fixed and only input_ids can be used. How can I pass inputs like {'input_ids': XX, 'attention_mask': XX}? Code: ![image](https://user-images.githubusercontent.com/38073340/91794928-df871e80-ec4e-11ea-86a8-9c264913b5b7.png) and then it reports: ![image](https://user-images.githubusercontent.com/38073340/91794969-f168c180-ec4e-11ea-98e1-2e1db94632ea.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6864/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6863
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6863/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6863/comments
https://api.github.com/repos/huggingface/transformers/issues/6863/events
https://github.com/huggingface/transformers/issues/6863
689,781,984
MDU6SXNzdWU2ODk3ODE5ODQ=
6,863
special token inconsistency for [UNK] token
{ "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "organizations_url": "https://api.github.com/users/cceyda/orgs", "repos_url": "https://api.github.com/users/cceyda/repos", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "received_events_url": "https://api.github.com/users/cceyda/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I think that's reasonable, the point of `skip_special_tokens` isn't to skip unknown tokens. cf @mfuntowicz @thomwolf @n1t0 ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.15.0-108-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 ### Who can help tokenizers: @mfuntowicz ## Information The Bert tokenizer treats the [UNK] token as **special** during `tokenizer.convert_ids_to_tokens(..., skip_special_tokens=True)`, while `tokenizer(..., return_special_tokens_mask=True)` doesn't (for obvious reasons). I think it would be better to preserve [UNK] tokens in `convert_ids_to_tokens` for consistency of the term "**special token**". ## To reproduce ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") # fast tokenizer also has the same problem # tokenizer = AutoTokenizer.from_pretrained("bert-large-cased", use_fast=True) text = "sentence 한 셔 word" print('tokens:', tokenizer.tokenize(text)) tokens = tokenizer(text, return_special_tokens_mask=True) print("input_ids:", tokens["input_ids"]) print("special_tokens_mask:", tokens["special_tokens_mask"]) no_special = tokenizer.convert_ids_to_tokens(tokens["input_ids"], skip_special_tokens=True) special = tokenizer.convert_ids_to_tokens(tokens["input_ids"]) print('tokens from ids (skip special): ', no_special) print('tokens from ids (keep special): ', special) print('special tokens:', tokenizer.all_special_tokens) ``` ## Expected behavior I also think keeping [UNK] would be the better behavior: `convert_ids_to_tokens` is used in inference pipelines, and using skip_special_tokens to get rid of [CLS]/[SEP]/[PAD] tokens leads to unintended loss of [UNK] tokens, which is important. Related: https://github.com/huggingface/transformers/issues/4391
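A small workaround sketch in the meantime: skip only the special tokens you actually want removed, deliberately keeping [UNK]:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
ids = tokenizer("sentence 한 셔 word")["input_ids"]

# Drop [CLS]/[SEP]/[PAD]/... but keep [UNK] in the decoded tokens.
skip_ids = set(tokenizer.all_special_ids) - {tokenizer.unk_token_id}
tokens = [tok for i, tok in zip(ids, tokenizer.convert_ids_to_tokens(ids)) if i not in skip_ids]
print(tokens)  # e.g. ['sentence', '[UNK]', '[UNK]', 'word']
```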
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6863/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6862
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6862/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6862/comments
https://api.github.com/repos/huggingface/transformers/issues/6862/events
https://github.com/huggingface/transformers/pull/6862
689,774,780
MDExOlB1bGxSZXF1ZXN0NDc2NzgxNTgy
6,862
cleanup: fix typo chunk_size_feed_forward in configuration_utils.py
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=h1) Report\n> Merging [#6862](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/367235ee52537ff7cada5e1c5c41cdd78731f092?el=desc) will **increase** coverage by `3.77%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6862/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6862 +/- ##\n==========================================\n+ Coverage 76.27% 80.04% +3.77% \n==========================================\n Files 157 157 \n Lines 28795 28794 -1 \n==========================================\n+ Hits 21963 23049 +1086 \n+ Misses 6832 5745 -1087 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <ø> (-0.70%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=footer). Last update [367235e...b1b2b17](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
Problem: ```python self.chunk_size_feed_forward = kwargs.pop("chunk_size_feed_forward", 0) # line 178 self.chunk_size_feed_forward = kwargs.pop("chunk_size_feed_forwar", 0)# line 198 ``` in https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_utils.py#L178 Solution: delete L198
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6862/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6862", "html_url": "https://github.com/huggingface/transformers/pull/6862", "diff_url": "https://github.com/huggingface/transformers/pull/6862.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6862.patch", "merged_at": 1598946208000 }
https://api.github.com/repos/huggingface/transformers/issues/6861
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6861/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6861/comments
https://api.github.com/repos/huggingface/transformers/issues/6861/events
https://github.com/huggingface/transformers/pull/6861
689,747,379
MDExOlB1bGxSZXF1ZXN0NDc2NzYwMDgw
6,861
add a final report to all pytest jobs
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=h1) Report\n> Merging [#6861](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/431ab19d7a467905018b165bc29b2a1130c1b188?el=desc) will **increase** coverage by `3.38%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6861/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6861 +/- ##\n==========================================\n+ Coverage 76.81% 80.20% +3.38% \n==========================================\n Files 157 157 \n Lines 28795 28795 \n==========================================\n+ Hits 22118 23094 +976 \n+ Misses 6677 5701 -976 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.02%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.08% <0.00%> (-0.51%)` | :arrow_down: |\n| ... 
and [25 more](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=footer). Last update [431ab19...39d237c](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
We had it added for one job (run_examples_torch); please add it to all pytest jobs - we need the output of which tests were run to debug the codecov issue. Thank you! As a reminder, `pytest -rA` finalizes the test run with a report like this: ``` PASSED examples/seq2seq/test_seq2seq_examples.py::test_seq2seq_dataset_truncation[patrickvonplaten/t5-tiny-random] PASSED examples/seq2seq/test_seq2seq_examples.py::test_seq2seq_dataset_truncation[sshleifer/bart-tiny-random] PASSED examples/seq2seq/test_seq2seq_examples.py::test_seq2seq_dataset_truncation[google/pegasus-xsum] PASSED examples/seq2seq/test_seq2seq_examples.py::test_legacy_dataset_truncation[sshleifer/bart-tiny-random] PASSED examples/seq2seq/test_seq2seq_examples.py::test_legacy_dataset_truncation[bert-base-cased] PASSED examples/test_examples.py::ExamplesTests::test_run_language_modeling PASSED examples/test_examples.py::ExamplesTests::test_run_pl_glue PASSED examples/test_examples.py::ExamplesTests::test_run_squad PASSED examples/bert-loses-patience/test_run_glue_with_pabee.py::PabeeTests::test_run_glue SKIPPED [1] examples/seq2seq/test_bash_script.py:25: too slow to run on CPU SKIPPED [1] examples/seq2seq/test_bash_script.py:32: too slow to run on CPU ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6861/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6861", "html_url": "https://github.com/huggingface/transformers/pull/6861", "diff_url": "https://github.com/huggingface/transformers/pull/6861.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6861.patch", "merged_at": 1598928443000 }
https://api.github.com/repos/huggingface/transformers/issues/6860
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6860/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6860/comments
https://api.github.com/repos/huggingface/transformers/issues/6860/events
https://github.com/huggingface/transformers/issues/6860
689,662,070
MDU6SXNzdWU2ODk2NjIwNzA=
6,860
Support nested data structures for input data
{ "login": "alex2awesome", "id": 3460632, "node_id": "MDQ6VXNlcjM0NjA2MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/3460632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex2awesome", "html_url": "https://github.com/alex2awesome", "followers_url": "https://api.github.com/users/alex2awesome/followers", "following_url": "https://api.github.com/users/alex2awesome/following{/other_user}", "gists_url": "https://api.github.com/users/alex2awesome/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex2awesome/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex2awesome/subscriptions", "organizations_url": "https://api.github.com/users/alex2awesome/orgs", "repos_url": "https://api.github.com/users/alex2awesome/repos", "events_url": "https://api.github.com/users/alex2awesome/events{/privacy}", "received_events_url": "https://api.github.com/users/alex2awesome/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Note that there are multiple frameworks that provide generic training loops. The goal of `Trainer` (I'm assuming you're talking about it since there is no `train.py` file) is not to replace them or compete with them but to provide an easy way to train and finetune Transformers models. Those models don't take nested inputs, so Trainer does not support this. Those models are expected to return the loss as the first item of their output, so Trainer expects it too.\r\n\r\nMaking Trainer more easily customizable by providing better hooks for subclassing (your use case could be done by overriding the two private methods you mention for instance) is something we are working on, but we won't have a base Trainer that is too generic, it will remain customized to the models the library provides.", "Thank you for your consideration and comments! ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
# 🚀 Feature request

Support in `train.py` for more complicated/nested data inputs rather than the currently assumed single-layer dictionary structure.

## Motivation

In many applications -- for example, mine, where I wish to implement a semi-supervised learning protocol -- the training input might need to be different from the single-layer dictionary that `train.py` is currently hard-coded to accept. For instance, in my case, I need a nested structure that supports:

```
{'supervised_data': {'input_ids': [...], 'labels': [...], 'attention_mask': [...]},
 'unsupervised_data': {'input_ids': [...], 'attention_mask': [...]},
 'augmented_data': {'input_ids': [...], 'attention_mask': [...]}}
```

(I am attempting to implement the following paper, Unsupervised Data Augmentation for Consistency Training https://arxiv.org/abs/1904.12848). However, I can imagine other use cases, including MAML, multi-task learning and multi-modal learning, where huggingface would provide a great framework but is currently limited in its data input format.

## Your contribution

I've identified a couple of quick fixes for this. Lines 962-974 of `train.py`, i.e. the `_prepare_inputs` function, should be rewritten as:

```python
def _prepare_inputs(self, inputs):
    """
    Prepare :obj:`inputs` before feeding them to the model, converting them to tensors
    if they are not already and handling potential state.
    """

    def map_nested_dicts_modify(ob, func):
        # apply func to every leaf of a nested dict/list structure
        if isinstance(ob, dict):
            return {k: map_nested_dicts_modify(v, func) for k, v in ob.items()}
        if isinstance(ob, list):
            return list(map(lambda x: map_nested_dicts_modify(x, func), ob))
        else:
            return func(ob)

    def to_device(v):
        if isinstance(v, torch.Tensor):
            v = v.to(self.args.device)
        return v

    inputs = map_nested_dicts_modify(inputs, to_device)
    if self.args.past_index >= 0 and self._past is not None:
        inputs["mems"] = self._past
    return inputs
```

Line 1135 of `train.py` should be expanded to:

```python
def _finditem(obj, key):
    # recursively search a nested dict for key; returns True if found, None otherwise
    if key in obj:
        return True
    for v in obj.values():
        if isinstance(v, dict):
            found = _finditem(v, key)
            if found is not None:
                return found
    return None

has_labels = any(_finditem(inputs, k) is not None for k in ["labels", "lm_labels", "masked_lm_labels"])
```

I'm not quite sure how to handle line 1254, or whether it is really that necessary, but one way might be to again use `_finditem` for `["labels", "input_ids"]`.

And that's it -- it's then up to the user to modify their models so that their `DataCollator` generates the expected structure, and `forward` takes in the expected structure!

Even if you explicitly don't wish to support the different training protocols mentioned above, I do think, from a software engineering perspective, that `train.py` should be more fully abstracted from the particulars of the data inputs and model inputs. This feature takes you a step closer to that (although not completely, as the `has_labels` line still expects a certain set of keys _somewhere_ in the data input). Better than the suggestions here would be to make the data input its own class for full abstraction, but I can see the argument against doing that, as it is yet another data class for users to learn and code up, and it would be a breaking update for all those who have implemented `DataCollator`s that adhere to your guidelines.

Alex
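As the reply above suggests, the same behavior can also be obtained today without patching the library, by subclassing `Trainer` and overriding the private hook. A minimal sketch, assuming the `_prepare_inputs(self, inputs)` signature quoted in this issue; the class name and the omission of the `past_index` handling are illustrative simplifications, not library code:

```python
import torch
from transformers import Trainer


class NestedInputTrainer(Trainer):
    """Hypothetical subclass that moves arbitrarily nested dict/list batches to the device."""

    def _prepare_inputs(self, inputs):
        # Recurse through dicts and lists; move any tensor leaf to the training device.
        if isinstance(inputs, dict):
            return {k: self._prepare_inputs(v) for k, v in inputs.items()}
        if isinstance(inputs, list):
            return [self._prepare_inputs(v) for v in inputs]
        if isinstance(inputs, torch.Tensor):
            return inputs.to(self.args.device)
        return inputs
```

Paired with a `DataCollator` that emits the nested structure above and a model whose `forward` accepts it, this keeps the upstream `Trainer` untouched.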
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6860/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6859
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6859/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6859/comments
https://api.github.com/repos/huggingface/transformers/issues/6859/events
https://github.com/huggingface/transformers/pull/6859
689,476,114
MDExOlB1bGxSZXF1ZXN0NDc2NTE2MDYx
6,859
[fix] typo in available in helper function
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=h1) Report\n> Merging [#6859](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bbdba0a76d70ff347884cbe62e0f13de903d84c7?el=desc) will **increase** coverage by `2.94%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6859/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6859 +/- ##\n==========================================\n+ Coverage 77.22% 80.17% +2.94% \n==========================================\n Files 157 157 \n Lines 28793 28793 \n==========================================\n+ Hits 22235 23084 +849 \n+ Misses 6558 5709 -849 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.44% <0.00%> (-7.59%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| ... 
and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=footer). Last update [bbdba0a...b8eb2a3](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
cc @sgugger will merge on ci passing.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6859/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6859", "html_url": "https://github.com/huggingface/transformers/pull/6859", "diff_url": "https://github.com/huggingface/transformers/pull/6859.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6859.patch", "merged_at": 1598911174000 }
https://api.github.com/repos/huggingface/transformers/issues/6858
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6858/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6858/comments
https://api.github.com/repos/huggingface/transformers/issues/6858/events
https://github.com/huggingface/transformers/issues/6858
689,473,953
MDU6SXNzdWU2ODk0NzM5NTM=
6,858
Remove hard-coded uses of float32 to fix mixed precision use in Distilbert
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Indeed! Do you want to open a PR to fix this?", "@LysandreJik \r\n\r\nI can do that. However @patrickvonplaten has already self-assigned for this. How do you think, @patrickvonplaten?", "Hey @chiapas, it would be great if you can open a PR for it :-) ", "Hi @patrickvonplaten , OK, that would be my first contribution to transformers :)" ]
1,598
1,599
1,599
COLLABORATOR
null
In this commit [Remove hard-coded uses of float32 to fix mixed precision use](https://github.com/huggingface/transformers/commit/4fca874ea995f3d23ad7062b07b5ed7c4f87c0cd#diff-e3ab4f29f29fe1d243a6b55fafaab097), the mixed precision issue is fixed for [modeling_tf_bert.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_bert.py). However, for [modeling_tf_distilbert.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_distilbert.py#L171), line 171 is not fixed yet, and when using `mixed_bfloat16` mixed precision on TPU we get:

> 173 embeddings = inputs_embeds + position_embeddings # (bs, max_seq_length, dim)
> --> 174 embeddings = self.LayerNorm(embeddings) # (bs, max_seq_length, dim)
> InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a bfloat16 tensor but is a float tensor

A very quick fix is the same as the fix for `modeling_tf_bert.py`: `position_embeddings = tf.cast(self.position_embeddings(position_ids), inputs_embeds.dtype)` @schmidek
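For context, the mismatch is easy to reproduce outside the model. A minimal sketch with made-up shapes (this is not the actual DistilBERT code path, just an illustration of the dtype rule and the proposed cast):

```python
import tensorflow as tf

# Toy stand-ins for the two embedding tensors; shapes are arbitrary.
inputs_embeds = tf.cast(tf.random.uniform((2, 8, 16)), tf.bfloat16)  # dtype under mixed_bfloat16
position_embeddings = tf.random.uniform((2, 8, 16))                  # hard-coded float32

# Adding them directly raises InvalidArgumentError, since AddV2 requires both
# operands to share a dtype. Casting as proposed above fixes it:
position_embeddings = tf.cast(position_embeddings, inputs_embeds.dtype)
embeddings = inputs_embeds + position_embeddings  # both bfloat16 now
```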
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6858/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6857
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6857/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6857/comments
https://api.github.com/repos/huggingface/transformers/issues/6857/events
https://github.com/huggingface/transformers/pull/6857
689,424,813
MDExOlB1bGxSZXF1ZXN0NDc2NDc0NTQz
6,857
Split hp search methods
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Cool!" ]
1,598
1,598
1,598
COLLABORATOR
null
Follow-up from #6747. Cleanly separates the backend-specific code for the two backends (optuna vs Ray), at the cost of some small code duplication in the objective function.
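For reference, a minimal sketch of the `Trainer.hyperparameter_search` API these backends sit behind; `model_init`, `training_args` and the datasets are assumed to be defined elsewhere, and the search-space ranges are made up:

```python
from transformers import Trainer

trainer = Trainer(
    model_init=model_init,   # re-instantiates the model for every trial
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

def my_hp_space(trial):      # optuna-style search space (hypothetical ranges)
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-4, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 5),
    }

best_run = trainer.hyperparameter_search(
    hp_space=my_hp_space,
    backend="optuna",        # or "ray", the other backend separated out here
    n_trials=10,
    direction="minimize",
)
```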
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6857/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6857", "html_url": "https://github.com/huggingface/transformers/pull/6857", "diff_url": "https://github.com/huggingface/transformers/pull/6857.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6857.patch", "merged_at": 1598901399000 }
https://api.github.com/repos/huggingface/transformers/issues/6856
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6856/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6856/comments
https://api.github.com/repos/huggingface/transformers/issues/6856/events
https://github.com/huggingface/transformers/issues/6856
689,411,961
MDU6SXNzdWU2ODk0MTE5NjE=
6,856
Changes in Pytorch 1.6 multinomial could break backward compatibility
{ "login": "andifunke", "id": 18445361, "node_id": "MDQ6VXNlcjE4NDQ1MzYx", "avatar_url": "https://avatars.githubusercontent.com/u/18445361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andifunke", "html_url": "https://github.com/andifunke", "followers_url": "https://api.github.com/users/andifunke/followers", "following_url": "https://api.github.com/users/andifunke/following{/other_user}", "gists_url": "https://api.github.com/users/andifunke/gists{/gist_id}", "starred_url": "https://api.github.com/users/andifunke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andifunke/subscriptions", "organizations_url": "https://api.github.com/users/andifunke/orgs", "repos_url": "https://api.github.com/users/andifunke/repos", "events_url": "https://api.github.com/users/andifunke/events{/privacy}", "received_events_url": "https://api.github.com/users/andifunke/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @andifunke, \r\n\r\nThanks a lot for your issue! Could you link the different implementation of `torch.multinomial` between PT v1.5.1 and PT v1.6.0 ? \r\nI understand your argument, but I think setting `replacement=True` is logically false...", "Hi @patrickvonplaten ,\r\n\r\nthanks for your reply!\r\n\r\n> Could you link the different implementation of torch.multinomial between PT v1.5.1 and PT v1.6.0 ?\r\n\r\nSure. The PR for the implementation is here: https://github.com/pytorch/pytorch/pull/39742 and the merge commit here: https://github.com/pytorch/pytorch/commit/97dfdaaad89c2082c90aebfa9180293847cffd60\r\n\r\n> I understand your argument, but I think setting replacement=True is logically false...\r\n\r\nI agree, it feels a bit hacky, but let me give you an example, why I think this workaround is justified:\r\n\r\nThe following code will behave differently in PT1.5.1 vs 1.6:\r\n\r\n```python\r\nimport torch\r\n\r\ntorch.manual_seed(0)\r\nt = torch.rand(10, 10)\r\n\r\ntorch.manual_seed(0)\r\na = torch.multinomial(t, num_samples=1, replacement=False)\r\n\r\ntorch.manual_seed(0)\r\nb = torch.multinomial(t, num_samples=1, replacement=True)\r\n\r\ntorch.__version__, a, b, all(a == b)\r\n```\r\n\r\nPytorch 1.5.1:\r\n\r\n```\r\nOut[1]: \r\n('1.5.1',\r\n tensor([[9],\r\n [7],\r\n [3],\r\n [9],\r\n [7],\r\n [6],\r\n [1],\r\n [3],\r\n [5],\r\n [1]]),\r\n tensor([[9],\r\n [7],\r\n [3],\r\n [9],\r\n [7],\r\n [6],\r\n [1],\r\n [3],\r\n [5],\r\n [1]]),\r\n True)\r\n```\r\n\r\nPytorch 1.6:\r\n\r\n```\r\n('1.6.0',\r\n tensor([[7],\r\n [7],\r\n [6],\r\n [1],\r\n [6],\r\n [1],\r\n [9],\r\n [5],\r\n [1],\r\n [2]]),\r\n tensor([[9],\r\n [7],\r\n [3],\r\n [9],\r\n [7],\r\n [6],\r\n [1],\r\n [3],\r\n [5],\r\n [1]]),\r\n False)\r\n```\r\n\r\nThis of course breaks reproducibility between versions when generating text.", "Oh, and here is another option, if `replacement=True` feels irritating:\r\n\r\nYou could use `torch.distributions.categorical.Categorical` instead, which uses the same sampling approach.\r\n\r\nexample:\r\n```python\r\nimport torch\r\n\r\ntorch.manual_seed(0)\r\nt = torch.rand(10, 10)\r\n\r\ntorch.manual_seed(0)\r\na = torch.distributions.categorical.Categorical(t).sample()\r\n\r\ntorch.manual_seed(0)\r\nb = torch.multinomial(t, num_samples=1, replacement=True)\r\n\r\ntorch.__version__, a, b, all(a == b.reshape(10))\r\n```\r\n\r\n```\r\nOut[1]:\r\n('1.6.0',\r\n tensor([9, 7, 3, 9, 7, 6, 1, 3, 5, 1]),\r\n tensor([[9],\r\n [7],\r\n [3],\r\n [9],\r\n [7],\r\n [6],\r\n [1],\r\n [3],\r\n [5],\r\n [1]]),\r\n True)\r\n```\r\n", "Hey @andifunke, \r\n\r\nThanks for your detailed comments - this is great! So it seems like the change was made to speed up the `torch.multinomial(do_replacement=False)` function. This is not really of interest to us though as it will never be the bottleneck in the `.generate()` function. \r\n\r\nI agree with you that we want to keep backward compatibility here. I think the best option in this case in to use `torch.distributions.categorical.Categorical(t).sample()` in this case.\r\n\r\nWill open a PR about it :-) ", "Great, thanks!", "Actually, I just noticed that `torch.distributions.categorical.Categorical(...)` uses `torch.multinomial` under the hood with `do_replacement=True` - so that this is not a better option. \r\n\r\nI'm not 100% sure how to proceed here now. @LysandreJik, @sgugger - what is your opinion on that? 
\r\n\r\nThe problem is the following: Because of a change in PyTorch's `torch.multinomial` function for 1.6, our generation method with `do_sample=True` yields different results when setting `torch.manual_seed(0)` between torch < 1.6 and 1.6.\r\n\r\nAs @andifunke pointed out, a simple fix would be to set `do_replacement=True`, which is logically not correct IMO, but it does not make a difference for sampling with `num_beams = 1`. For sampling with `num_beams > 1`, however, it would change the results.\r\n\r\nDo you guys think we should go for the simple fix of `do_replacement=True` to keep backward compatibility when using `torch.manual_seed(0)` ? \r\nIt seems like backwards compatibility for `num_beams > 1` is broken either way since it would be false to set `do_replacement=True` there. ", "Can we copy the old implementation somewhere and just use that or is it hidden in C/CUDA?", "Did we also reach out to the PyTorch team and make sure they are aware of this BC break?", "Looks like this is hidden in C/CUDA: https://github.com/pytorch/pytorch/pull/39742/files .\r\nNot sure whether the PyTorch team is aware of it...", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,605
1,605
NONE
null
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.4.0-42-generic-x86_64-with-debian-bullseye-sid - Python version: 3.6.10 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ### Who can help @TevenLeScao ## Information Model I am using: GPT-2, but this is relevant to all language models that use sampling. This is not really a bug report against transformers, but rather a suggestion for handling a breaking change in the behavior of `torch.multinomial()` that was introduced with v1.6. I've noticed that the sampling in `GenerationMixin._generate_no_beam_search()` has changed between PT1.5.1 and PT1.6, even when given the same inputs and random state. This is due to a new implementation path of `torch.multinomial()` with `replacement=False`, which breaks determinism compared to older PT versions. However, there is an easy fix to restore the previous behavior and maintain backward compatibility. Since `num_samples` is set to 1, the actual `replacement` value is irrelevant, but setting `replacement` to `True` will use the old sampling implementation and return deterministic results equal to those of earlier versions. I would therefore recommend changing the call to `multinomial()` in `_generate_no_beam_search()` like this: ```next_token = torch.multinomial(probs, num_samples=1, replacement=True).squeeze(1)``` Best regards, Andreas
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6856/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6855
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6855/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6855/comments
https://api.github.com/repos/huggingface/transformers/issues/6855/events
https://github.com/huggingface/transformers/issues/6855
689,410,514
MDU6SXNzdWU2ODk0MTA1MTQ=
6,855
Hugging face - RuntimeError: Caught RuntimeError in replica 0 on device 0 on Azure Databricks
{ "login": "jay2017-git", "id": 69466515, "node_id": "MDQ6VXNlcjY5NDY2NTE1", "avatar_url": "https://avatars.githubusercontent.com/u/69466515?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jay2017-git", "html_url": "https://github.com/jay2017-git", "followers_url": "https://api.github.com/users/jay2017-git/followers", "following_url": "https://api.github.com/users/jay2017-git/following{/other_user}", "gists_url": "https://api.github.com/users/jay2017-git/gists{/gist_id}", "starred_url": "https://api.github.com/users/jay2017-git/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jay2017-git/subscriptions", "organizations_url": "https://api.github.com/users/jay2017-git/orgs", "repos_url": "https://api.github.com/users/jay2017-git/repos", "events_url": "https://api.github.com/users/jay2017-git/events{/privacy}", "received_events_url": "https://api.github.com/users/jay2017-git/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Try smaller batch sizes and/or bigger GPUs", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
How do I run the run_language_modeling.py script from Hugging Face using the pretrained roberta-base model to fine-tune on my own data on Azure Databricks with a GPU cluster? Using Transformers versions 2.9.1 and 3.0, Python 3.6, Torch 1.5.0, torchvision 0.6. This is the command I ran on Azure Databricks: ` %run '/dbfs/FileStore/tables/dev/run_language_modeling.py' \ --output_dir='/dbfs/FileStore/tables/final_train/models/roberta_base_reduce_n' \ --model_type=roberta \ --model_name_or_path=roberta-base \ --do_train \ --num_train_epochs 5 \ --train_data_file='/dbfs/FileStore/tables/final_train/train_data/all_data_desc_list_full.txt' \ --mlm ` This is the error I get after running the above command: ` RuntimeError Traceback (most recent call last) /dbfs/FileStore/tables/dev/run_language_modeling.py in <module> 279 280 if __name__ == "__main__": --> 281 main() /dbfs/FileStore/tables/dev/run_language_modeling.py in main() 243 else None 244 ) --> 245 trainer.train(model_path=model_path) 246 trainer.save_model() 247 # For convenience, we also re-save the tokenizer to the same directory, /databricks/python/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path) 497 continue 498 --> 499 tr_loss += self._training_step(model, inputs, optimizer) 500 501 if (step + 1) % self.args.gradient_accumulation_steps == 0 or ( /databricks/python/lib/python3.7/site-packages/transformers/trainer.py in _training_step(self, model, inputs, optimizer) 620 inputs["mems"] = self._past 621 --> 622 outputs = model(**inputs) 623 loss = outputs[0] # model outputs are always tuple in transformers (see doc) 624 /databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 548 result = self._slow_forward(*input, **kwargs) 549 else: --> 550 result = self.forward(*input, **kwargs) 551 for hook in self._forward_hooks.values(): 552 hook_result = hook(self, input, result) /databricks/python/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs) 153 return self.module(*inputs[0], **kwargs[0]) 154 replicas = self.replicate(self.module, self.device_ids[:len(inputs)]) --> 155 outputs = self.parallel_apply(replicas, inputs, kwargs) 156 return self.gather(outputs, self.output_device) 157 /databricks/python/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in parallel_apply(self, replicas, inputs, kwargs) 163 164 def parallel_apply(self, replicas, inputs, kwargs): --> 165 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) 166 167 def gather(self, outputs, output_device): /databricks/python/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs, kwargs_tup, devices) 83 output = results[i] 84 if isinstance(output, ExceptionWrapper): ---> 85 output.reraise() 86 outputs.append(output) 87 return outputs /databricks/python/lib/python3.7/site-packages/torch/_utils.py in reraise(self) 393 # (https://bugs.python.org/issue2651), so we work around it. 394 msg = KeyErrorMessage(msg) --> 395 raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 239, in forward output_hidden_states=output_hidden_states, File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 762, in forward output_hidden_states=output_hidden_states, File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 439, in forward output_attentions, File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 371, in forward hidden_states, attention_mask, head_mask, output_attentions=output_attentions, File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 315, in forward hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, output_attentions, File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 240, in forward attention_scores = attention_scores / math.sqrt(self.attention_head_size) RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 11.17 GiB total capacity; 10.68 GiB already allocated; 95.31 MiB free; 10.77 GiB reserved in total by PyTorch) ` How do I resolve this?
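The usual mitigation for this error is a smaller per-device batch size (optionally compensated with `--gradient_accumulation_steps`) and/or a shorter `--block_size`. A hedged sketch of the same command with such flags added; the values are illustrative, and on transformers 2.9.1 the batch-size flag is spelled `--per_gpu_train_batch_size` instead:

```
%run '/dbfs/FileStore/tables/dev/run_language_modeling.py' \
  --output_dir='/dbfs/FileStore/tables/final_train/models/roberta_base_reduce_n' \
  --model_type=roberta \
  --model_name_or_path=roberta-base \
  --do_train \
  --num_train_epochs 5 \
  --train_data_file='/dbfs/FileStore/tables/final_train/train_data/all_data_desc_list_full.txt' \
  --mlm \
  --per_device_train_batch_size 2 \
  --gradient_accumulation_steps 8 \
  --block_size 128
```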
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6855/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6854
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6854/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6854/comments
https://api.github.com/repos/huggingface/transformers/issues/6854/events
https://github.com/huggingface/transformers/pull/6854
689,408,046
MDExOlB1bGxSZXF1ZXN0NDc2NDYyODk2
6,854
Fix marian slow test
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=h1) Report\n> Merging [#6854](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/61b7ba93f5f4dfcef795e20a9fb11b2d4ee7608e?el=desc) will **decrease** coverage by `0.13%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6854/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6854 +/- ##\n==========================================\n- Coverage 79.94% 79.80% -0.14% \n==========================================\n Files 157 157 \n Lines 28739 28739 \n==========================================\n- Hits 22974 22936 -38 \n- Misses 5765 5803 +38 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.85% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.96% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.45% <0.00%> (-0.40%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.71% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <0.00%> (+57.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=footer). Last update [61b7ba9...a83ab56](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
Fix slow failing test that depended on old seq2seq_batch logic.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6854/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6854", "html_url": "https://github.com/huggingface/transformers/pull/6854", "diff_url": "https://github.com/huggingface/transformers/pull/6854.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6854.patch", "merged_at": 1598904643000 }
https://api.github.com/repos/huggingface/transformers/issues/6853
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6853/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6853/comments
https://api.github.com/repos/huggingface/transformers/issues/6853/events
https://github.com/huggingface/transformers/issues/6853
689,402,686
MDU6SXNzdWU2ODk0MDI2ODY=
6,853
FAILED tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_forward
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6853/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6852
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6852/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6852/comments
https://api.github.com/repos/huggingface/transformers/issues/6852/events
https://github.com/huggingface/transformers/pull/6852
689,381,590
MDExOlB1bGxSZXF1ZXN0NDc2NDQxMjQ0
6,852
Logging doc
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=h1) Report\n> Merging [#6852](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02d09c8fcc6bda2c345c84cec53289abbe7532ac?el=desc) will **increase** coverage by `1.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6852/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6852 +/- ##\n==========================================\n+ Coverage 79.01% 80.01% +1.00% \n==========================================\n Files 157 157 \n Lines 28739 28739 \n==========================================\n+ Hits 22707 22995 +288 \n+ Misses 6032 5744 -288 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/utils/logging.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlscy9sb2dnaW5nLnB5) | `75.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.40% <0.00%> (-0.18%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% <0.00%> (+2.98%)` | :arrow_up: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `84.44% <0.00%> (+20.00%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.66% <0.00%> (+25.00%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `53.23% <0.00%> (+40.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=footer). Last update [02d09c8...e45ca17](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "How do we get `transformers.logging.*`? There is either `transformers.utils.logging.*` or `logging.*` if the latter was imported.\r\n\r\nUnrelated, also has the default just changed from INFO to WARN? 
I rebased my copy and noticed this change. Ah, yes, it was https://github.com/huggingface/transformers/commit/4561f05c5fafc2b636a2fc1d0dded9057d439745", "You get `transformers.logging.*` after doing `import transformers`. logging is imported in the project init, so there is no need to add the .utils.", "Ah, I see - the test I was working on was doing `from transformers import logging`. If we follow this in docs it leads to a shorter:\r\n\r\n`logging.set_verbosity(logging.INFO)`\r\n\r\nand it matches the actual `logging.INFO` from the logging package.\r\n\r\n.... but then `from transformers import logging` makes it hard to do `import logging`... same `logging` name. So then:\r\n\r\n```\r\nimport transformers\r\ntransformers.logging.set_verbosity(transformers.logging.INFO)\r\n```\r\nwhile being quite verbose, has no collision with the normal `logging` package\r\n\r\nThank you for expanding the docs, @sgugger - this is awesome!", "Note that you have the shortcut\r\n```\r\ntransformers.logging.set_verbosity_info()\r\n```\r\nbut yes, importing logging directly will create a conflict with the logging module.", "You meant `transformers.logging.set_verbosity_{info|warning|...}` (must be a typo in `login` :)\r\n\r\nYes, this is good!", "Oops, fixed my comment." ]
1,598
1,598
1,598
COLLABORATOR
null
Adds documentation for the new centralized logger.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6852/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6852", "html_url": "https://github.com/huggingface/transformers/pull/6852", "diff_url": "https://github.com/huggingface/transformers/pull/6852.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6852.patch", "merged_at": 1598944595000 }
https://api.github.com/repos/huggingface/transformers/issues/6851
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6851/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6851/comments
https://api.github.com/repos/huggingface/transformers/issues/6851/events
https://github.com/huggingface/transformers/pull/6851
689,346,101
MDExOlB1bGxSZXF1ZXN0NDc2NDEyMTk5
6,851
Distill marian
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6851/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6851", "html_url": "https://github.com/huggingface/transformers/pull/6851", "diff_url": "https://github.com/huggingface/transformers/pull/6851.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6851.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6850
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6850/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6850/comments
https://api.github.com/repos/huggingface/transformers/issues/6850/events
https://github.com/huggingface/transformers/pull/6850
689,309,566
MDExOlB1bGxSZXF1ZXN0NDc2MzgyNDk2
6,850
move wandb/comet logger init to train() to allow parallel logging
{ "login": "krfricke", "id": 14904111, "node_id": "MDQ6VXNlcjE0OTA0MTEx", "avatar_url": "https://avatars.githubusercontent.com/u/14904111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krfricke", "html_url": "https://github.com/krfricke", "followers_url": "https://api.github.com/users/krfricke/followers", "following_url": "https://api.github.com/users/krfricke/following{/other_user}", "gists_url": "https://api.github.com/users/krfricke/gists{/gist_id}", "starred_url": "https://api.github.com/users/krfricke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krfricke/subscriptions", "organizations_url": "https://api.github.com/users/krfricke/orgs", "repos_url": "https://api.github.com/users/krfricke/repos", "events_url": "https://api.github.com/users/krfricke/events{/privacy}", "received_events_url": "https://api.github.com/users/krfricke/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=h1) Report\n> Merging [#6850](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b8e4906c974101d328bdd01245bc1695f9b07088?el=desc) will **increase** coverage by `0.17%`.\n> The diff coverage is `78.57%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6850/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6850 +/- ##\n==========================================\n+ Coverage 80.44% 80.61% +0.17% \n==========================================\n Files 161 161 \n Lines 30113 30119 +6 \n==========================================\n+ Hits 24224 24281 +57 \n+ Misses 5889 5838 -51 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `54.95% <78.57%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.27%)` | :arrow_up: |\n| ... 
and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=footer). Last update [b8e4906...a27cdd3](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I don't think you want to do the logger setup within `train` as users can call `Trainer` for evaluation only as well.\r\nIt probably needs to stay within `__init__` but should also go into a hyperparameter search function, maybe `_objective`.\r\n\r\nWhat's important is for loggers to setup at `__init__` and at each parameter search.\r\n\r\nHowever @sgugger will have a better idea in how to organize this function.", "Yes, the train method could be called several times on the same Trainer, or the Trainer could be used for evaluation only, and those logging platforms should be setup once only, so the init looks best. Maybe we could add a private attribute `_has_been_setup` that could be checked inside the log method before reporting to wandb/comet and call the setup method if needed? Would that work for the hp search with Ray?", "That sounds good. Should it still be setup in the init then? For hyperparameter search this doesn't really make sense (and creates an \"empty\" run in wandb), and if it is setup on logging calls anyway we wouldn't necessarily need it there. But happy to leave it there, too.", "We can leave the setup to the first time we try to log something or the first call to train then (if there is a check to the same flag, we can call the setup method several times safely).", "I think the first time we try to log makes sense, and also allow to use `Trainer` in eval only.\r\n\r\nIf people just want to call multiple times `train`, it would be nice if it was straightforward for them to choose between logging to the same run or logging to a new run. Hyperparameter search would obviously automatically choose to log to a new run.\r\n\r\nNote that logging several `train` calls to the same run is actually not currently supported due to `global_step` being reset to 0 [here](https://github.com/huggingface/transformers/blob/54cfefc2ac9e3e1c0968a2ed0dd3c711eee76196/src/transformers/trainer.py#L645) which will cause issues at least in both Tensorboard and W&B.", "I adjusted the PR so the loggers will be initialized on the first call to `log()`. Is this what you had in mind?", "Yes. I just think we should add the line to setup at the beginning of log, so that the loggers get initialized if we try to log something.", "Okay, so the current position is good? (When clicking the \"Files changed\" link it looks like it's in `_hp_search_setup`, but it's actually right at the beginning of `log`)", "Looks great!", "Oh yeah, sorry I looked too fast. LGTM!" ]
1,598
1,599
1,599
CONTRIBUTOR
null
Moving the logger setup to the `train()` function allows parallel runs (e.g. in hyperparameter search) to log each run individually. Alternative to #6791
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6850/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6850", "html_url": "https://github.com/huggingface/transformers/pull/6850", "diff_url": "https://github.com/huggingface/transformers/pull/6850.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6850.patch", "merged_at": 1599148155000 }
https://api.github.com/repos/huggingface/transformers/issues/6849
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6849/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6849/comments
https://api.github.com/repos/huggingface/transformers/issues/6849/events
https://github.com/huggingface/transformers/issues/6849
689,307,990
MDU6SXNzdWU2ODkzMDc5OTA=
6,849
Printing probabilities
{ "login": "Mahmedturk", "id": 48975334, "node_id": "MDQ6VXNlcjQ4OTc1MzM0", "avatar_url": "https://avatars.githubusercontent.com/u/48975334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mahmedturk", "html_url": "https://github.com/Mahmedturk", "followers_url": "https://api.github.com/users/Mahmedturk/followers", "following_url": "https://api.github.com/users/Mahmedturk/following{/other_user}", "gists_url": "https://api.github.com/users/Mahmedturk/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mahmedturk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mahmedturk/subscriptions", "organizations_url": "https://api.github.com/users/Mahmedturk/orgs", "repos_url": "https://api.github.com/users/Mahmedturk/repos", "events_url": "https://api.github.com/users/Mahmedturk/events{/privacy}", "received_events_url": "https://api.github.com/users/Mahmedturk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, you have an example of how to do exactly this in the [documentation](https://huggingface.co/transformers/task_summary.html#sequence-classification):\r\n\r\n```py\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\r\nimport torch\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased-finetuned-mrpc\")\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased-finetuned-mrpc\")\r\n\r\nclasses = [\"not paraphrase\", \"is paraphrase\"]\r\nsequence_0 = \"The company HuggingFace is based in New York City\"\r\nsequence_1 = \"Apples are especially bad for your health\"\r\nsequence_2 = \"HuggingFace's headquarters are situated in Manhattan\"\r\n\r\nparaphrase = tokenizer(sequence_0, sequence_2, return_tensors=\"pt\")\r\nnot_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors=\"pt\")\r\n\r\nparaphrase_classification_logits = model(**paraphrase).logits\r\nnot_paraphrase_classification_logits = model(**not_paraphrase).logits\r\n\r\nparaphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0]\r\nnot_paraphrase_results = torch.softmax(not_paraphrase_classification_logits, dim=1).tolist()[0]\r\n\r\n# Should be paraphrase\r\nfor i in range(len(classes)):\r\n print(f\"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%\")\r\n\r\n# Should not be paraphrase\r\nfor i in range(len(classes)):\r\n print(f\"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%\")\r\n```" ]
1,598
1,599
1,599
NONE
null
Hi, I apologize if that's a stupid question. How can I print probabilities during inference produced by the softmax layer with BertForSequenceClassification?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6849/timeline
completed
null
null
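Reduced to the asker's exact case, the answer above is one softmax over the model's logits. A minimal sketch; `bert-base-uncased` is only a placeholder checkpoint (its classification head is randomly initialized unless fine-tuned), and indexing `[0]` picks the logits out of the returned outputs across library versions:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Placeholder checkpoint: any fine-tuned sequence-classification model
# works the same way; this head is random until fine-tuned.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("This library is great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs)[0]        # first output is the logits tensor
probs = torch.softmax(logits, dim=-1)  # class probabilities, shape (1, num_labels)
print(probs.tolist()[0])
```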
https://api.github.com/repos/huggingface/transformers/issues/6848
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6848/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6848/comments
https://api.github.com/repos/huggingface/transformers/issues/6848/events
https://github.com/huggingface/transformers/issues/6848
689,290,011
MDU6SXNzdWU2ODkyOTAwMTE=
6,848
unexpected behavior on RoBERTa tokenizer when using additional special tokens
{ "login": "amirdnc", "id": 9753154, "node_id": "MDQ6VXNlcjk3NTMxNTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9753154?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amirdnc", "html_url": "https://github.com/amirdnc", "followers_url": "https://api.github.com/users/amirdnc/followers", "following_url": "https://api.github.com/users/amirdnc/following{/other_user}", "gists_url": "https://api.github.com/users/amirdnc/gists{/gist_id}", "starred_url": "https://api.github.com/users/amirdnc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amirdnc/subscriptions", "organizations_url": "https://api.github.com/users/amirdnc/orgs", "repos_url": "https://api.github.com/users/amirdnc/repos", "events_url": "https://api.github.com/users/amirdnc/events{/privacy}", "received_events_url": "https://api.github.com/users/amirdnc/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
## Environment info - `transformers` version: 3.0.2 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.8.3 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help tokenizers: @mfuntowicz ## Information When tokenizing with the RoBERTa tokenizer, an added special token, and add_prefix_space=True, the token following the special token does not get a space. ## To reproduce Steps to reproduce the behavior: 1. Run the following code ``` from transformers import RobertaTokenizer tokenizer = RobertaTokenizer.from_pretrained("roberta-base") tokenizer.add_special_tokens({'additional_special_tokens': ['[d_s]']}) print(tokenizer('test', add_prefix_space=True)) print(tokenizer('[d_s] test', add_prefix_space=True)) ``` ## output {'input_ids': [0, 1296, 2], 'attention_mask': [1, 1, 1]} {'input_ids': [0, 50271, 21959, 2], 'attention_mask': [1, 1, 1, 1]} ## Expected behavior {'input_ids': [0, 1296, 2], 'attention_mask': [1, 1, 1]} {'input_ids': [0, 50271, 1296, 2], 'attention_mask': [1, 1, 1, 1]} The second token should not change because of the first one.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6848/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6848/timeline
completed
null
null
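The mechanism behind the report above is easiest to see on the subword pieces: RoBERTa's byte-level BPE marks a leading space with "Ġ", and the word after the added special token loses that marker. A hedged sketch (slow tokenizer, v3-era API; `[d_s]` is the reporter's token):

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
tokenizer.add_special_tokens({"additional_special_tokens": ["[d_s]"]})

# "Ġtest" (with the space marker) maps to id 1296; a bare "test" after the
# special token maps to a different id, which is the reported behavior.
print(tokenizer.tokenize("test", add_prefix_space=True))
print(tokenizer.tokenize("[d_s] test", add_prefix_space=True))
```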
https://api.github.com/repos/huggingface/transformers/issues/6847
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6847/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6847/comments
https://api.github.com/repos/huggingface/transformers/issues/6847/events
https://github.com/huggingface/transformers/pull/6847
689,274,499
MDExOlB1bGxSZXF1ZXN0NDc2MzUzODMx
6,847
Fix resuming training for Windows
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
COLLABORATOR
null
Fixes #6720
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6847/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6847/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6847", "html_url": "https://github.com/huggingface/transformers/pull/6847", "diff_url": "https://github.com/huggingface/transformers/pull/6847.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6847.patch", "merged_at": 1598886151000 }
https://api.github.com/repos/huggingface/transformers/issues/6846
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6846/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6846/comments
https://api.github.com/repos/huggingface/transformers/issues/6846/events
https://github.com/huggingface/transformers/pull/6846
689,271,563
MDExOlB1bGxSZXF1ZXN0NDc2MzUxNDUy
6,846
Separate implementation for Torch-Scriptable BERT model
{ "login": "sbrody18", "id": 67021628, "node_id": "MDQ6VXNlcjY3MDIxNjI4", "avatar_url": "https://avatars.githubusercontent.com/u/67021628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sbrody18", "html_url": "https://github.com/sbrody18", "followers_url": "https://api.github.com/users/sbrody18/followers", "following_url": "https://api.github.com/users/sbrody18/following{/other_user}", "gists_url": "https://api.github.com/users/sbrody18/gists{/gist_id}", "starred_url": "https://api.github.com/users/sbrody18/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sbrody18/subscriptions", "organizations_url": "https://api.github.com/users/sbrody18/orgs", "repos_url": "https://api.github.com/users/sbrody18/repos", "events_url": "https://api.github.com/users/sbrody18/events{/privacy}", "received_events_url": "https://api.github.com/users/sbrody18/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Thanks a lot for doing this! It's a great way for us to realize the needed changes to get fully torch-scriptable models. Aside from the tests (we can help fix them once the design is approved), I'd love to see what parts we can reuse from bert (with potential non-harmful modifications) and what parts need to be rewritten because they're not compatible with the rest of our API.\r\n\r\nFor instance, the change in the embeddings layer is just a type annotation which we could do in bert (it would be a nice addition) and then import that layer. On the other hand, the whole parts with `return_dict` are probably fully incompatible with scripting.\r\n\r\nI guess in an ideal world, we would reuse the same internal layers from bert and only change the full models if that is possible.", "As you can see in a [previous comment on the thread](https://github.com/huggingface/transformers/issues/5067#issuecomment-662586999) my initial implementation tried to go the minimal-duplication route. I modified the original models to be scriptable, and then had a thin wrapper around them to transform the output into dictionary form.\r\nSo basically, you had BertScriptableModel returning a tuple of fixed size, and BertModel who's forward just ran BertScriptableModel and put the output in a dictionary, to keep the interface.\r\nThe main issue with that was that the code kept changing. Other than that, it should be doable.\r\n", "90% of the changes were type annotations, and assertions about Nullity (which would improve the code quality regardless).\r\nThe added bonus of the minimal duplication route is that it makes it easier to convert other models that use BERT components, e.g., Albert.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=h1) Report\n> Merging [#6846](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2de7ee0385bee4134ca894a208fa3a2aaf7d5371?el=desc) will **decrease** coverage by `0.85%`.\n> The diff coverage is `18.92%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6846/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6846 +/- ##\n==========================================\n- Coverage 80.20% 79.35% -0.86% \n==========================================\n Files 157 158 +1 \n Lines 28734 29257 +523 \n==========================================\n+ Hits 23047 23216 +169 \n- Misses 5687 6041 +354 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_scriptable\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19zY3JpcHRhYmxlX2JlcnQucHk=) | `18.92% <18.92%> (ø)` | |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `59.43% <0.00%> (-35.85%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| 
[src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (+0.66%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+1.42%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=footer). Last update [2de7ee0...a2c6c43](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Yes, I can see that clearly now. Sorry for going back and forth with you on this. We definitely want the type annotations in the main bert file, and I think the first implementation is better on that regard. It just misses the `return_dict` argument, which is easy to add with the way you designed things (happy to do it myself if you give me access to a branch).", "The previous implementation is at https://github.com/sbrody18/transformers/tree/scripting\r\nAs mentioned, it is a behind head, and still needs some work.\r\nI sent an invite for access to my repo. Let me know if there's a better way to share the branch.", "Yes, saw the invite and accepted it. I have some stuff to finish but it's on my TODO and I hope to be able to add the missing stuff before the end of the week. 
Do you prefer a PR or can I directly commit on this branch?", "No rush on my side.\r\nA PR might be better, to make it easier to comment, but direct is fine if that's too much trouble.", "Super cool PR!\r\nI can tweak our benchmarking tools a bit to get some numbers on speed improvements using your scriptable Bert model tomorrow", "@patrickvonplaten That would be great!\r\nThe major improvement is expected when running a large set of inputs with varying lengths, individually or in small batches (that's where not having to pad to max_length would come into play)", "> @patrickvonplaten That would be great!\r\n> The major improvement is expected when running a large set of inputs with varying lengths, individually or in small batches (that's where not having to pad to max_length would come into play)\r\n\r\nGot it!", "This is great, looking forward to this PR!", "Okey I did some benchmarking, which can be seen here: https://github.com/huggingface/transformers/pull/6907. \r\n\r\n@sbrody18 - it would be awesome if you could take a look if I am using the function correctly.", "Ok, after reviewing this PR and the other design in [this diff](https://github.com/huggingface/transformers/compare/clean_scripting?expand=1), along with @patrickvonplaten benchmark results in #6907 we've come to the conclusion that adding scriptable layers is a bit too much for almost no gain, since `script` and `trace` now have the same speed in PyTorch.\r\n\r\nAll type annotations and asserts are welcome additions on the other hand, if you want to suggest a PR with just those changes.", "Sure. Makes sense. I'll see if I can put one together, but other things might take priority.\r\nThanks for all the work you've put in to look into this.", "@sbrody18 - Thanks a lot for making us aware of this issue! I learned a lot about the differences between `torch.jit.trace` and `torch.jit.script` thanks to you!", "Yes thanks a lot for all your work on this, I learned a lot on scriptable pytorch modules thanks to the PR!", "I just wanted to point out that, IIUC, a big benefit of making everything scriptable is free reuse from languages other than Python (for example, from the C++ frontend). I know that the prescribed setup is to train in python, trace, then deploy at runtime with a traced TorchScript, but the freedom to train from C++, or even the JVM with a few extra bindings, is a pretty big win. ", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,598
1,614
1,614
NONE
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #5067 Separate (re-)implementation of BERT such that it can be used with TorchScript's script() method (not just trace). This allows for better/more reliable handling of internal model logic and removes the requirement for fixed input size, resulting in large speedups when average input size is << than max input size. Tests duplicate all the ones on the original BERT model, with the only fundamental difference being the test_torchscript* tests, which now use script() rather than trace().
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6846/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6846/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6846", "html_url": "https://github.com/huggingface/transformers/pull/6846", "diff_url": "https://github.com/huggingface/transformers/pull/6846.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6846.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6845
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6845/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6845/comments
https://api.github.com/repos/huggingface/transformers/issues/6845/events
https://github.com/huggingface/transformers/pull/6845
689,264,532
MDExOlB1bGxSZXF1ZXN0NDc2MzQ1NjUw
6,845
Fix in Adafactor docstrings
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=h1) Report\n> Merging [#6845](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2de7ee0385bee4134ca894a208fa3a2aaf7d5371?el=desc) will **decrease** coverage by `0.36%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6845/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6845 +/- ##\n==========================================\n- Coverage 80.20% 79.83% -0.37% \n==========================================\n Files 157 157 \n Lines 28734 28734 \n==========================================\n- Hits 23047 22941 -106 \n- Misses 5687 5793 +106 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `82.28% <ø> (ø)` | |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+1.42%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.63% <0.00%> (+7.18%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+10.95%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> 
(impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=footer). Last update [2de7ee0...9e2f5ef](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
COLLABORATOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6845/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6845", "html_url": "https://github.com/huggingface/transformers/pull/6845", "diff_url": "https://github.com/huggingface/transformers/pull/6845.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6845.patch", "merged_at": 1598885568000 }
https://api.github.com/repos/huggingface/transformers/issues/6844
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6844/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6844/comments
https://api.github.com/repos/huggingface/transformers/issues/6844/events
https://github.com/huggingface/transformers/issues/6844
689,259,666
MDU6SXNzdWU2ODkyNTk2NjY=
6,844
Pegasus: replication and distillation results
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" }, { "id": 2314010923, "node_id": "MDU6TGFiZWwyMzE0MDEwOTIz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Replication", "name": "Replication", "color": "bfdadc", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "If anyone wants to help, evaluate on a dataset where the third column is not filled it.\r\nSteps:\r\nFirst, download the data from nlp package, save to disk in format described in https://github.com/huggingface/transformers/blob/master/examples/seq2seq/download_wmt.py\r\n\r\nHelper function for run_eval\r\n```bash\r\ngen_test_hub_summ () {\r\n\t# need to add --fp16 and --bs = whatever\r\n model=$1\r\n DATA_DIR=$2\r\n echo $DATA_DIR\r\n\tsave_dir=$3\r\n\tmkdir -p $save_dir\r\n\tshift\r\n shift\r\n shift\r\n python run_eval.py $model $DATA_DIR/test.source $save_dir/test_gens.txt --reference_path $DATA_DIR/test.target --score_path $save_dir/test_rouge.json --task summarization $@\r\n}\r\n\r\n```\r\nThen Roughly:\r\n```\r\ncd examples/seq2seq\r\ngen_test_hub_summ google/pegasus-{dataset} dataset {dataset}_results --bs 4\r\n```\r\n\r\nLeave the results, as well as any observations about truncation produced summaries as a comment in this issue!\r\n", "### CNN Dailymail\r\n\r\nOne possible reason for replication issue is that our beam search logic differs from the original, causing 16% of the summaries to be truncated.\r\n\r\nFinetuning with our finetuning code and `--max_target_length=142` partially fixes this issue:\r\n+ Can get a distilled version (16-4) `43.23/21.29/31.3` .436 S/sample (released at `sshleifer/dpx-cnn-16-4`)\r\n+ Can finetune the 16-16 pegasus-cnn checkpoint to get `44.13/21.37/30.94` 1.4S/Sample (0.2 Rouge2 behind published.) ( `sshleifer/pegasus-cnn-ft-v2`)\r\n+ original google/pegasus-cnn_dailymail scored 20.73 Rouge 2.\r\n+ For both of these finetuned models, >99.8% of generations end in punctuation.\r\n\r\n\r\n### XSUM\r\n\r\n`sshleifer/distill-pegasus-xsum-16-4`\r\n```\r\n{\"rouge1\": 44.942, \"rouge2\": 23.0412, \"rougeL\": 37.8579,\r\n \"n_obs\": 11333, \"seconds_per_sample\": 0.1972, \"batch_size\": 16}\r\n```\r\n\r\nTeacher metrics (I don't remember batch size):\r\n```\r\n{\"rouge1\": 46.8773, \"rouge2\": 24.46, \"rougeL\": 39.1507, \r\n\"n_obs\": 11328, \"seconds_per_sample\": 0.3308}\r\n```\r\n", "I intend to post a writeup on distillation techniques at some point before Oct 15!", "Re: replication, best download strategy maybe to start with\r\nhttps://github.com/google-research/pegasus/blob/master/pegasus/data/public_datasets_test.py and modify.", "Cnn update: \r\n- I believe we have a preprocessing issue. Ported models generate the `<n>` token at the beginning of sentences, whereas ours do not. The pegasus original code replaces newline symbol with `<n>`. 
`PegasusTokenizer` should probably do this: https://github.com/huggingface/transformers/issues/7327", "For CNNDM, I can get this score with `google/pegasus-cnn_dailymail` model.\r\n``` \r\nROUGE-1:\r\nrouge_1_f_score: 0.4436 with confidence interval (0.4413, 0.4459)\r\nrouge_1_recall: 0.4825 with confidence interval (0.4797, 0.4853)\r\nrouge_1_precision: 0.4368 with confidence interval (0.4339, 0.4395)\r\n\r\nROUGE-2:\r\nrouge_2_f_score: 0.2145 with confidence interval (0.2120, 0.2170)\r\nrouge_2_recall: 0.2323 with confidence interval (0.2297, 0.2350)\r\nrouge_2_precision: 0.2124 with confidence interval (0.2097, 0.2150)\r\n\r\nROUGE-l:\r\nrouge_l_f_score: 0.4141 with confidence interval (0.4118, 0.4165)\r\nrouge_l_recall: 0.4501 with confidence interval (0.4474, 0.4530)\r\nrouge_l_precision: 0.4079 with confidence interval (0.4051, 0.4106)\r\n```\r\nScript I run:\r\n```\r\n./run_eval.py google/pegasus-cnn_dailymail /home/ffajri/Data/huggingface/cnn_dm/test.source pred_cnndm_pegasus.txt \\\r\n --reference_path /home/ffajri/Data/huggingface/cnn_dm/test.target \\\r\n --score_path cnn_rouge.json \\\r\n --task summarization \\\r\n --device cuda \\\r\n --max_source_length 512 \\\r\n --max_target_length 128 \\\r\n --bs 4\r\n```\r\nI notice the first R1 output from the transformer is 43.xx something, but I recalculate ROUGE (to get the scores above) as follows:\r\n1) First, I replace `<n>` with `\\n` in the decoding results. (as you said above)\r\n2) I don't use the gold summary provided by `huggingface` because sentences are not separated by the newline character. I think its necessary to separate sentences in the gold summary. So I use the original gold test set from See et al., 2017 to compute ROUGE.\r\n2) I lower case all decoded and gold summary (but not sure if it really affects the ROUGE score)\r\n3) I calculate ROUGE with the `pyrouge` code (not the ROUGE in transformer)\r\n\r\nHope it can help the fix. \r\n", "Would you be willing to share a few lines of \r\n\r\n`cnn_dm/test.source`, `pred_cnndm_pegasus.txt`, and `cnn_dm/test.target`\r\n\r\nThanks!", "Hi, for inference, I use the same set from `huggingface`\r\n\r\n**`test.source`**\r\n``\r\nMarseille, France (CNN)The French prosecutor leading an investigation into the crash of Germanwings Flight 9525 insisted Wednesday that he was not aware of any video footage from on board the plane. Marseille prosecutor Brice Robin told CNN that \"so far no videos were used in the crash investigation.\" He added, \"A person who has such a video needs to immediately give it to the investigators.\" ............\r\n``\r\n\r\n**`test.target`**\r\n``\r\nMarseille prosecutor says \"so far no videos were used in the crash investigation\" despite media reports . Journalists at Bild and Paris Match are \"very confident\" the video clip is real, an editor says . 
Andreas Lubitz had informed his Lufthansa training school of an episode of severe depression, airline says .\r\n``\r\n\r\n**`pred_cnndm_pegasus.txt`** (Result)\r\n``\r\n\"A person who has such a video needs to immediately give it to the investigators,\" prosecutor says .<n>\"It is a very disturbing scene,\" editor-in-chief of Bild online tells \"Erin Burnett: Outfront\"\r\n``\r\n\r\nThen, I got R1 = 43.xx (as the `./run_eval.py` output)\r\n\r\nTo get the R1 = 44.xx, I separately calculate ROUGE (pyrouge) with:\r\n\r\n**`test.target`** from [See et al., 2017 ](https://github.com/abisee/pointer-generator)\r\n``\r\nmarseille prosecutor says '' so far no videos were used in the crash investigation '' despite media reports .\\njournalists at bild and paris match are '' very confident '' the video clip is real , an editor says .\\nandreas lubitz had informed his lufthansa training school of an episode of severe depression , airline says .\r\n``\r\n\r\n_updated_ **`pred_cnndm_pegasus.txt`**\r\n``\r\n\"a person who has such a video needs to immediately give it to the investigators,\" prosecutor says .\\n\"it is a very disturbing scene,\" editor-in-chief of bild online tells \"erin burnett: outfront\"\r\n``\r\n\r\nBoth now have `\\n` which I think is necessary for calculating ROUGE.", "We fixed our `calculate_rouge_score` to address the `\\n` issue and now we are getting\r\n\r\n44.31/21.53/41.15 for `sshleifer/pegasus-cnn-ft-v2`! Thanks for the help!\r\n\r\n\r\n", "Updated the table in the Issue description with most recent results after the `calculate_rouge_fix` \r\nMoving forward, questions about specific results should be asked on the forums or in a separate issue with @stas00, @patil-suraj, and @sshleifer tagged.", "hi guys : \r\n\r\nis there code to pretrainning the model used for my own data .\r\nThank you \r\n \r\n ", "Thank you for reproducing this results! \r\nRegarding the treatment of the \\<n\\>, newline char \"\\n\" in input text are being replaced by \"\\<n\\>\" and vice versa for the output.", "I have tried around 10 sets of hyperparameters and only achieved nearly worse results. (ROUGE-1 ~ 43.9, for CNN/DailyMail) These are options of my experiments:\r\n\r\n- Optimizer: Adafactor <-> AdamW\r\n- Learning rate: 5e-4 <-> 1e-4\r\n- Batch size: 4\r\n- Gradient accumulation steps: 1 <-> 8 <-> 64\r\n- Accelarator: dp <-> ddp\r\n- Epochs: 20 - 80 (after around 10 epochs it started to overfit (val loss increases))\r\n- Datasets: both old and new versions (old version doesn't consist of \r\n\\<n\\> in the target summary)\r\n\r\nI don't know what to continue, can someone tell me what my problems are?", "Hi @thongnguyen050999 \r\n\r\nSee if this comment above helps \r\nhttps://github.com/huggingface/transformers/issues/6844#issuecomment-699499846", "Hi @patil-suraj,\r\n\r\nYes, I did notice that, these are my results:\r\n\r\n- Sentence ends with \"\\<n\\>\": ROUGE-1: 45.94, ROUGE-L: 32.24\r\n- Sentence ends with \"\\\\n\": ROUGE-1: 43.96, ROUGE-L: 40.87", "Are my results reasonable (representing the expected outcome)? :-) ", "> Are my results reasonable (representing the expected outcome)? :-)\r\n\r\nHi, can you please tell me a bit about what do you want to achieve? and which pre-trained Pegasus model are you currently using? 
It seems to me you are not doing only inference but some fine-tuning of the Pegasus model (based on your hyperparameter)?\r\n", "Yes, here is my experiment description:\r\n\r\n- Goal: I want to reproduce the results from the Pegasus paper (in the future I might add some changes based upon the baseline 🧑‍🎓 ), in which I finetuned from the pretrained checkpoint\r\n- Pretrained model I use: google/pegasus-large ", "I guess `google/pegasus-large` in `huggingface` is a Mixed & Stochastic model where we expect to have 44.16/21.56/41.30 (which is slightly lower than your current score).\r\n\r\nHave you tried to set the hyperparameter of the original implementation? You can check it [here]( https://github.com/google-research/pegasus/blob/939830367bcf411193d2b5eca2f2f90f3f9260ca/pegasus/params/public_params.py).\r\n\r\nThe primary hyperparameter will be this:\r\n\"max_input_len\": 1024, --> (longer text)\r\n\"max_output_len\": 128,\r\n\"train_steps\": 210000,\r\n\"learning_rate\": 0.001,\r\n \"batch_size\": 8,\r\n\r\nYou probably want to follow their hyperparameter for inference as well (e.g. beam size etc)", "Hi @fajri91, I have tried your suggestion and achieved the following results after 210k steps:\r\n\r\n- Huggingface version:\r\n+ ROUGE-1 = 43.2011\r\n+ ROUGE-L = 39.99\r\n\r\n- Google version (I ran their default code without modifications)\r\n+ ROUGE-1 = 43.01\r\n+ ROUGE-L = 39.92", "> ### Replication\r\n> [link](https://github.com/google-research/pegasus)\r\n> \r\n> mixed & stochastic column of this [table](https://github.com/google-research/pegasus#results-update)\r\n> \r\n> dataset\tAuthors\tThis Repo\tbest bart\tbest bart name\r\n> xsum\t47.60/24.83/39.64\t46.87/24.46/39.15\t22.32/37.39\tdistilbart-xsum-12-6\r\n> cnn_dailymail\t44.16/21.56/41.30\tsee comment\t21.26/30.59\tdistilbart-cnn-12-6\r\n> newsroom\t45.07/33.39/41.28\t41.03/29.83/36.96\t\t\r\n> multi_news\t47.65/18.75/24.95\t47.58/19.0/24.77\t\t\r\n> gigaword\t39.65/20.47/36.76\t39.79/20.56/36.80\t\t\r\n> wikihow\t46.39/22.12/38.41 *\t46.85/23.64/28.73\t\t\r\n> reddit_tifu\t27.99/9.81/22.94\t32.75/11.68/24.97\t\t\r\n> big_patent\t52.29/33.08/41.66 *\t\t\t\r\n> arxiv\t44.21/16.95/25.67\t44.83/17.34/25.60\t\t\r\n> pubmed\t45.97/20.15/28.25\t45.40/19.42/26.93\t\t\r\n> aeslc\t37.68/21.25/36.51\t37.09/21.40/35.93\t\t\r\n> billsum\t59.67/41.58/47.59\t56.18/39.94/45.39\t\t\r\n> * (* (authors footnote)) the numbers of wikihow and big_patent datasets are not comparable because of change in tokenization and data\r\n> \r\n> #### Final Update (2020-10-16)\r\n> Mission accomplished thanks to the work of @patil-suraj, and @stas00 !\r\n> \r\n> The above table now shows that our results are close enough. We suspect differences are due to treatment of the `<n>` character that pegasus generates and slightly different beam search implementations.\r\n> \r\n> [Link to Spreadsheet with timing data](https://docs.google.com/spreadsheets/d/1ODfoK-tXOV6TLXDMnujdGLtFhA8oVTy-Cv6Ib6qKgWk/edit?usp=sharing)\r\n> \r\n> Questions about specific results should be asked on the forums/separate issues with @stas00, @patil-suraj, and @sshleifer tagged.\r\n\r\nHi Sam, I have a quick question regarding to obtain the results for Gigaword using checkpoint \"google/pegasus-gigaword\" provided by Google. Currently, I followed a very simple setup using \"google/pegasus-gigaword\" and follow directly from huggingface default codes in generating gigaword summary. For dataset, I directly load 'gigaword' from datasets library without pre-processing. 
I currently use the rouge_score library to compute the rouge score. However, my results evaluating on 1951 test samples in Gigaword deviate by almost 10 rouge points (rouge1, rouge2, rougel: 28, 12 and 25 vs 39.79/20.56/36.80). Could you please share the setup you used to reproduce your experiment?\r\n\r\nThanks in advance!\r\n" ]
1,598
1,651
1,602
CONTRIBUTOR
null
### Replication [link](https://github.com/google-research/pegasus) mixed & stochastic column of this [table](https://github.com/google-research/pegasus#results-update) | dataset | Authors| This Repo| best bart | best bart name | ---- | ----|----|----|----| | xsum | 47.60/24.83/39.64| 46.87/24.46/39.15|22.32/37.39|distilbart-xsum-12-6| | cnn_dailymail | 44.16/21.56/41.30| see comment|21.26/30.59|distilbart-cnn-12-6| | newsroom | 45.07/33.39/41.28 |41.03/29.83/36.96| | multi_news | 47.65/18.75/24.95|47.58/19.0/24.77| | gigaword | 39.65/20.47/36.76|39.79/20.56/36.80| | wikihow | 46.39/22.12/38.41 *|46.85/23.64/28.73| | reddit_tifu | 27.99/9.81/22.94|32.75/11.68/24.97| | big_patent |52.29/33.08/41.66 *| | arxiv | 44.21/16.95/25.67|44.83/17.34/25.60| | pubmed | 45.97/20.15/28.25|45.40/19.42/26.93| | aeslc | 37.68/21.25/36.51|37.09/21.40/35.93| | billsum | 59.67/41.58/47.59|56.18/39.94/45.39| + (* (authors footnote)) the numbers of wikihow and big_patent datasets are not comparable because of change in tokenization and data #### Final Update (2020-10-16) Mission accomplished thanks to the work of @patil-suraj, and @stas00 ! The above table now shows that our results are close enough. We suspect differences are due to treatment of the `<n>` character that pegasus generates and slightly different beam search implementations. [Link to Spreadsheet with timing data](https://docs.google.com/spreadsheets/d/1ODfoK-tXOV6TLXDMnujdGLtFhA8oVTy-Cv6Ib6qKgWk/edit?usp=sharing) Questions about specific results should be asked on the forums/separate issues with @stas00, @patil-suraj, and @sshleifer tagged.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6844/timeline
completed
null
null
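The `<n>` treatment that closed the CNN/DailyMail gap in the table above amounts to restoring real newlines before scoring, since sentence-level ROUGE treats lines as sentence boundaries. A minimal sketch with the `rouge_score` package; the prediction and reference strings are invented examples:

```python
from rouge_score import rouge_scorer

# Pegasus emits "<n>" where the reference data had newlines; restore them
# before scoring so rougeLsum sees one sentence per line.
prediction = "First summary sentence .<n>Second summary sentence ."
reference = "First gold sentence .\nSecond gold sentence ."

prediction = prediction.replace("<n>", "\n")

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=True)
print(scorer.score(reference, prediction))
```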
https://api.github.com/repos/huggingface/transformers/issues/6843
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6843/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6843/comments
https://api.github.com/repos/huggingface/transformers/issues/6843/events
https://github.com/huggingface/transformers/pull/6843
689,174,724
MDExOlB1bGxSZXF1ZXN0NDc2MjcxMzA2
6,843
Adding another translation example
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=h1) Report\n> Merging [#6843](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2de7ee0385bee4134ca894a208fa3a2aaf7d5371?el=desc) will **decrease** coverage by `2.38%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6843/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6843 +/- ##\n==========================================\n- Coverage 80.20% 77.82% -2.39% \n==========================================\n Files 157 157 \n Lines 28734 28734 \n==========================================\n- Hits 23047 22362 -685 \n- Misses 5687 6372 +685 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-79.30%)` | :arrow_down: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-0.76%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.40% <0.00%> (+0.34%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| ... 
and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=footer). Last update [2de7ee0...52b62c3](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Stale" ]
1,598
1,604
1,604
CONTRIBUTOR
null
- IWSLT 2017 (should be added to `nlp` via https://github.com/huggingface/nlp/pull/470#issue-462074344; currently a coded file here). - NL-EN, but it should hopefully work for any language pair (to be checked). - Training loop in Lightning; training logging is poor, and notably there is no translation example, which is probably important.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6843/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6843", "html_url": "https://github.com/huggingface/transformers/pull/6843", "diff_url": "https://github.com/huggingface/transformers/pull/6843.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6843.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6842
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6842/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6842/comments
https://api.github.com/repos/huggingface/transformers/issues/6842/events
https://github.com/huggingface/transformers/issues/6842
689,115,028
MDU6SXNzdWU2ODkxMTUwMjg=
6,842
Update `convert_bart` script to allow loading nonstandard model architectures and custom pretrained Fairseq models.
{ "login": "tomsherborne", "id": 14322875, "node_id": "MDQ6VXNlcjE0MzIyODc1", "avatar_url": "https://avatars.githubusercontent.com/u/14322875?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomsherborne", "html_url": "https://github.com/tomsherborne", "followers_url": "https://api.github.com/users/tomsherborne/followers", "following_url": "https://api.github.com/users/tomsherborne/following{/other_user}", "gists_url": "https://api.github.com/users/tomsherborne/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomsherborne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomsherborne/subscriptions", "organizations_url": "https://api.github.com/users/tomsherborne/orgs", "repos_url": "https://api.github.com/users/tomsherborne/repos", "events_url": "https://api.github.com/users/tomsherborne/events{/privacy}", "received_events_url": "https://api.github.com/users/tomsherborne/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,604
1,604
NONE
null
**edit 30/10/20** - this approach doesn't work anymore given the differences in interface from when I submitted it to now (mostly because of Hydra configs in fairseq). Closed. # 🚀 Feature request Allow the `convert_bart` script to load custom BART pre-trained models from fairseq. ## Motivation The current BART conversion script to migrate from Fairseq to Huggingface assumes that you are using predefined shape/parameter values from a BART model. There might be a use case where you want to experiment with your own Seq2Seq pretraining and train a smaller BART model (i.e. cutting dimensions by 1/3rd) because of small datasets/pruning/some other reason. There presently isn't a dedicated Fairseq tutorial on training your own BART models, but it is possible using the "denoising" task and the appropriate parameters (e.g. [preprocess](https://gist.github.com/tomsherborne/ab1a5a28f9d843cf633d6f7843e96a63) and [train](https://gist.github.com/tomsherborne/ae3529375b7a538a1b03a53f34850234)). This is what I have been trying with a small dataset and a smaller BART model. Then, if you want to use this model within `transformers`, the conversion script doesn't work because (a) `load_xsum_checkpoint` assumes your model should be the shape of `bart.large.cnn` and rejects the incorrect tensor sizes, and (b) an additional weight `decoder.output_projection.weight` needs to be ignored. To convert correctly to `transformers` and pass all the tests, I found you can switch from `torch.hub.load` to `fairseq.models.bart.BARTModel.from_pretrained`, which calls `torch.hub` internally and gives you the same output model. This means you can convert local, custom BART models into `transformers` models and use them in downstream tasks or upload them to the models archive. ## Your contribution I made my own version of `convert_bart` [here](https://gist.github.com/tomsherborne/e7b629ee9cf0618febb211683a410ce5) which passes the script's own tests. This could be improved/refactored if this feature is still useful. There could be a better way to do all this that I have missed. There's a hack because I call `torch.load` and `from_pretrained()` on the same weights and I'm not sure if they are both needed. The execution is a bit ugly because I set `hf_config` as a path to a JSON file created from `BartConfig.from_pretrained()` and then manually adjusted to the new model size. ``` SCRIPT_PATH="${HOME}/ed/sp/transformers/src/transformers/convert_bart_original_pytorch_checkpoint_to_pytorch.py" TEMPLATE_ARCHIVE="/Users/tom/ed/sp/pretrain/runs/fairseq/bart_base_enqa_small/bart-mini.tar.gz" CHECKPOINT_PATH="${HOME}/ed/sp/pretrain/runs/fairseq/bart_base_enqa_small/ckpt/checkpoint_best.pt" DUMP_PATH="${HOME}/ed/sp/pretrain/runs/hf_fromfairseq/bart-mini" CONFIG_PATH="${HOME}/ed/sp/pretrain/config/hf_bart_configs/bart-mini/config.json" source ~/miniconda3/bin/activate $ENV_NAME python $SCRIPT_PATH --hf_config $CONFIG_PATH --model_template $TEMPLATE_ARCHIVE $CHECKPOINT_PATH $DUMP_PATH ```
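A minimal sketch of the loading approach the issue describes, assuming a pre-Hydra fairseq; the paths and the hand-edited config file are placeholders, the `shared.weight` remap mirrors the stock conversion script, and `strict=False` is used to tolerate key-naming drift across fairseq versions:

```python
import torch
from fairseq.models.bart import BARTModel
from transformers import BartConfig, BartForConditionalGeneration

# Load the custom fairseq checkpoint; from_pretrained calls torch.hub internally,
# so nonstandard tensor shapes are accepted as long as they match the fairseq args.
bart = BARTModel.from_pretrained(
    "/path/to/checkpoint_dir", checkpoint_file="checkpoint_best.pt"
)
bart.eval()

state_dict = bart.model.state_dict()
# The extra key mentioned in the issue; the HF model ties its output
# projection to the embeddings instead, so this weight is dropped.
state_dict.pop("decoder.output_projection.weight", None)
state_dict["shared.weight"] = state_dict["decoder.embed_tokens.weight"]

# config.json must be edited by hand to match the smaller model's dimensions.
config = BartConfig.from_pretrained("/path/to/config.json")
hf_model = BartForConditionalGeneration(config)
hf_model.model.load_state_dict(state_dict, strict=False)
hf_model.save_pretrained("/path/to/hf_dump")
```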
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6842/reactions", "total_count": 3, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6842/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6841
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6841/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6841/comments
https://api.github.com/repos/huggingface/transformers/issues/6841/events
https://github.com/huggingface/transformers/pull/6841
689,000,312
MDExOlB1bGxSZXF1ZXN0NDc2MTMwMTA2
6,841
TF Flaubert w/ pre-norm
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=h1) Report\n> Merging [#6841](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/05c3214153d30245928279724ce2a9b701ec8aab?el=desc) will **decrease** coverage by `0.10%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6841/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6841 +/- ##\n==========================================\n- Coverage 80.27% 80.16% -0.11% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n- Hits 22946 22916 -30 \n- Misses 5640 5670 +30 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <ø> (ø)` | |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.01%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=footer). Last update [05c3214...f863f8e](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
MEMBER
null
#5614 fixed TF Flaubert without pre-norm, but didn't fix the pre-norm variant. Fixes #6084
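For context, a schematic of the difference being fixed — this is an illustration of the `pre_norm` option, not the actual Flaubert code:

```python
def transformer_block(x, sublayer, layer_norm, pre_norm: bool):
    # Pre-norm: normalize the input, run the sublayer, then add the residual.
    if pre_norm:
        return x + sublayer(layer_norm(x))
    # Post-norm (the default): run the sublayer, add the residual, then normalize.
    return layer_norm(x + sublayer(x))
```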
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6841/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6841", "html_url": "https://github.com/huggingface/transformers/pull/6841", "diff_url": "https://github.com/huggingface/transformers/pull/6841.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6841.patch", "merged_at": 1598864001000 }
https://api.github.com/repos/huggingface/transformers/issues/6840
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6840/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6840/comments
https://api.github.com/repos/huggingface/transformers/issues/6840/events
https://github.com/huggingface/transformers/pull/6840
688,984,276
MDExOlB1bGxSZXF1ZXN0NDc2MTE3ODk3
6,840
Model cards for loodos
{ "login": "hakanozgur", "id": 66843134, "node_id": "MDQ6VXNlcjY2ODQzMTM0", "avatar_url": "https://avatars.githubusercontent.com/u/66843134?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hakanozgur", "html_url": "https://github.com/hakanozgur", "followers_url": "https://api.github.com/users/hakanozgur/followers", "following_url": "https://api.github.com/users/hakanozgur/following{/other_user}", "gists_url": "https://api.github.com/users/hakanozgur/gists{/gist_id}", "starred_url": "https://api.github.com/users/hakanozgur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hakanozgur/subscriptions", "organizations_url": "https://api.github.com/users/hakanozgur/orgs", "repos_url": "https://api.github.com/users/hakanozgur/repos", "events_url": "https://api.github.com/users/hakanozgur/events{/privacy}", "received_events_url": "https://api.github.com/users/hakanozgur/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
Adds cards for 6 models in the ./models/loodos directory: albert-base-turkish-uncased, bert-base-turkish-uncased, electra-base-turkish-uncased, electra-base-turkish-64k-uncased-discriminator, electra-small-turkish-cased-discriminator, electra-small-turkish-uncased-discriminator
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6840/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6840", "html_url": "https://github.com/huggingface/transformers/pull/6840", "diff_url": "https://github.com/huggingface/transformers/pull/6840.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6840.patch", "merged_at": 1598996124000 }
https://api.github.com/repos/huggingface/transformers/issues/6839
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6839/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6839/comments
https://api.github.com/repos/huggingface/transformers/issues/6839/events
https://github.com/huggingface/transformers/issues/6839
688,954,217
MDU6SXNzdWU2ODg5NTQyMTc=
6,839
pegasus-large: Can we have input text descriptions longer than the maximum input length of 512?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes ! https://discuss.huggingface.co/t/pegasus-questions/838/8?u=valhalla", "Thanks " ]
1,598
1,598
1,598
CONTRIBUTOR
null
As mentioned in the Pegasus paper, the authors used sinusoidal positional encodings to make sure the model can work with inputs longer than the default 512 tokens. So, with the Hugging Face implementation, can we use inputs with longer lengths?
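Because the positional encodings are sinusoidal (computed, not learned), one hedged way to try longer inputs is to enlarge `max_position_embeddings` at load time. A minimal sketch, assuming the sinusoidal buffer is rebuilt from the config rather than restored from the checkpoint:

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Extra kwargs to from_pretrained update the config before the model is built.
model = PegasusForConditionalGeneration.from_pretrained(
    "google/pegasus-large", max_position_embeddings=2048
)
tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-large")

batch = tokenizer(
    ["a very long source document ..."],
    truncation=True, max_length=2048, return_tensors="pt",
)
summary_ids = model.generate(batch["input_ids"])
```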
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6839/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6838
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6838/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6838/comments
https://api.github.com/repos/huggingface/transformers/issues/6838/events
https://github.com/huggingface/transformers/pull/6838
688,868,836
MDExOlB1bGxSZXF1ZXN0NDc2MDIxMzQ4
6,838
fix typo in comments (modeling_bert)
{ "login": "prajjwal1", "id": 24690051, "node_id": "MDQ6VXNlcjI0NjkwMDUx", "avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prajjwal1", "html_url": "https://github.com/prajjwal1", "followers_url": "https://api.github.com/users/prajjwal1/followers", "following_url": "https://api.github.com/users/prajjwal1/following{/other_user}", "gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}", "starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions", "organizations_url": "https://api.github.com/users/prajjwal1/orgs", "repos_url": "https://api.github.com/users/prajjwal1/repos", "events_url": "https://api.github.com/users/prajjwal1/events{/privacy}", "received_events_url": "https://api.github.com/users/prajjwal1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,599
1,599
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6838/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6838", "html_url": "https://github.com/huggingface/transformers/pull/6838", "diff_url": "https://github.com/huggingface/transformers/pull/6838.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6838.patch", "merged_at": 1599044137000 }
https://api.github.com/repos/huggingface/transformers/issues/6837
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6837/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6837/comments
https://api.github.com/repos/huggingface/transformers/issues/6837/events
https://github.com/huggingface/transformers/issues/6837
688,840,242
MDU6SXNzdWU2ODg4NDAyNDI=
6,837
tokenization_gpt2 save vocabulary is not saving special tokens
{ "login": "MHDBST", "id": 6802945, "node_id": "MDQ6VXNlcjY4MDI5NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/6802945?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MHDBST", "html_url": "https://github.com/MHDBST", "followers_url": "https://api.github.com/users/MHDBST/followers", "following_url": "https://api.github.com/users/MHDBST/following{/other_user}", "gists_url": "https://api.github.com/users/MHDBST/gists{/gist_id}", "starred_url": "https://api.github.com/users/MHDBST/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MHDBST/subscriptions", "organizations_url": "https://api.github.com/users/MHDBST/orgs", "repos_url": "https://api.github.com/users/MHDBST/repos", "events_url": "https://api.github.com/users/MHDBST/events{/privacy}", "received_events_url": "https://api.github.com/users/MHDBST/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! The `save_vocabulary` method, as its name implies and as is explained in its [docstring](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.save_vocabulary), only saves the vocabulary. If you want to save the entire tokenizer (with special tokens), you should use the `save_pretrained` method." ]
1,598
1,598
1,598
NONE
null
## Environment info - `transformers` version: 2.8.0 - Platform: linux - Python version: 3.6.10 - PyTorch version (GPU?): GPU - Tensorflow version (GPU?): - - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: the official example scripts (give details below) The task I am working on is: my own task or dataset (give details below) ## To reproduce Steps to reproduce the behavior: 1. Load a pre-trained tokenizer 2. Add your intended special tokens: `tokenizer.add_tokens(SPECIAL_TOKENS_LIST)` 3. Save your tokenizer's vocabulary with: `tokenizer.save_vocabulary(PATH)` ## Expected behavior With the current `save_vocabulary` function, we are just saving the predefined tokens: `with open(vocab_file, "w", encoding="utf-8") as f: f.write(json.dumps(self.encoder, ensure_ascii=False))` This line should be modified as follows to save the special tokens as well: `vocab_with_special_tokens = dict(self.encoder, **self.added_tokens_encoder); with open(vocab_file, "w", encoding="utf-8") as f: f.write(json.dumps(vocab_with_special_tokens, ensure_ascii=False))`
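A small sketch contrasting the two save paths, as the maintainer's reply below points out: `save_pretrained` writes `added_tokens.json` alongside the vocabulary, whereas `save_vocabulary` only writes the base encoder. The added token strings here are hypothetical examples:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_tokens(["<machine>", "<human>"])  # hypothetical special tokens

# Writes vocab.json, merges.txt, added_tokens.json, tokenizer_config.json, ...
tokenizer.save_pretrained("./my_tokenizer")

# Round-trips with the added tokens intact.
reloaded = GPT2Tokenizer.from_pretrained("./my_tokenizer")
assert len(reloaded) == len(tokenizer)
```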
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6837/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6836
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6836/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6836/comments
https://api.github.com/repos/huggingface/transformers/issues/6836/events
https://github.com/huggingface/transformers/issues/6836
688,832,341
MDU6SXNzdWU2ODg4MzIzNDE=
6,836
RAM MemoryError
{ "login": "shizhediao", "id": 18120087, "node_id": "MDQ6VXNlcjE4MTIwMDg3", "avatar_url": "https://avatars.githubusercontent.com/u/18120087?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shizhediao", "html_url": "https://github.com/shizhediao", "followers_url": "https://api.github.com/users/shizhediao/followers", "following_url": "https://api.github.com/users/shizhediao/following{/other_user}", "gists_url": "https://api.github.com/users/shizhediao/gists{/gist_id}", "starred_url": "https://api.github.com/users/shizhediao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shizhediao/subscriptions", "organizations_url": "https://api.github.com/users/shizhediao/orgs", "repos_url": "https://api.github.com/users/shizhediao/repos", "events_url": "https://api.github.com/users/shizhediao/events{/privacy}", "received_events_url": "https://api.github.com/users/shizhediao/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Do you solve the problem?......", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,608
1,608
NONE
null
# ❓ Questions & Help I was wondering is the RAM MemoryError issue solved? I encountered this issue because my 128 GB RAM memory could not load all 48 GB data. There are some discussions before, such as #6636, #3388, #1290, #4009 However, I don't see there is lazyDataLoader right now. Could you provide any hints about how to deal with a large dataset? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6836/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6836/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6835
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6835/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6835/comments
https://api.github.com/repos/huggingface/transformers/issues/6835/events
https://github.com/huggingface/transformers/issues/6835
688,809,082
MDU6SXNzdWU2ODg4MDkwODI=
6,835
DistributedSortishSampler
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" }, { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[]
1,598
1,599
1,599
CONTRIBUTOR
null
In examples/seq2seq/finetune.py, `--sortish_sampler --gpus 2` raises an assertion error, and if you remove the assert, it raises another error. Ideally we should add a `Seq2SeqDataset.get_distributed_sortish_sampler` method and use it in the relevant case.
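A rough sketch of what such a sampler could look like — not the implementation that eventually landed. It shards indices per rank like `DistributedSampler`, then sortish-buckets each shard; the `src_lens` attribute on the dataset is an assumption:

```python
import math
import torch
import torch.distributed as dist
from torch.utils.data import Sampler

def sortish_indices(lengths, bs):
    # Shuffle, then sort within large chunks so each batch sees similar lengths.
    idxs = torch.randperm(len(lengths)).tolist()
    sz = bs * 50
    chunks = [idxs[i : i + sz] for i in range(0, len(idxs), sz)]
    return [i for ck in chunks for i in sorted(ck, key=lambda j: -lengths[j])]

class DistributedSortishSampler(Sampler):
    """Shard indices across ranks, then sortish-bucket each shard."""

    def __init__(self, dataset, batch_size, num_replicas=None, rank=None):
        self.num_replicas = num_replicas or dist.get_world_size()
        self.rank = dist.get_rank() if rank is None else rank
        self.dataset, self.batch_size = dataset, batch_size
        self.num_samples = math.ceil(len(dataset) / self.num_replicas)
        self.total_size = self.num_samples * self.num_replicas
        self.epoch = 0

    def __iter__(self):
        g = torch.Generator()
        g.manual_seed(self.epoch)
        indices = torch.randperm(len(self.dataset), generator=g).tolist()
        indices += indices[: self.total_size - len(indices)]  # pad so shards are even
        shard = indices[self.rank : self.total_size : self.num_replicas]
        # Assumption: the dataset exposes per-example source lengths.
        lengths = [self.dataset.src_lens[i] for i in shard]
        return iter([shard[i] for i in sortish_indices(lengths, self.batch_size)])

    def __len__(self):
        return self.num_samples

    def set_epoch(self, epoch):
        self.epoch = epoch
```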
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6835/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6834
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6834/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6834/comments
https://api.github.com/repos/huggingface/transformers/issues/6834/events
https://github.com/huggingface/transformers/pull/6834
688,792,953
MDExOlB1bGxSZXF1ZXN0NDc1OTY0NTQ4
6,834
[s2s] distill: --normalize_hidden --supervise_forward
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=h1) Report\n> Merging [#6834](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c5d43a872f0e85ce069e921c5bda02374e5b9cbf?el=desc) will **decrease** coverage by `2.98%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6834/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6834 +/- ##\n==========================================\n- Coverage 80.02% 77.04% -2.99% \n==========================================\n Files 161 161 \n Lines 30120 30120 \n==========================================\n- Hits 24104 23205 -899 \n- Misses 6016 6915 +899 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.41% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `90.00% <0.00%> (+5.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <0.00%> (+30.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=footer). Last update [c5d43a8...7325ecf](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,599
1,599
CONTRIBUTOR
null
This appears to help Pegasus and Marian distillation; a bart-large-xsum-12-3 baseline is running. - No impact on bart-large-xsum distillation. - +1 BLEU for Marian. - +20 ROUGE for Pegasus (it is impossible to do anything without it). - Verified that the `torch.stack` math is identical to the old for-loop math.
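A hedged sketch of what normalized hidden-state supervision can look like; the names and the exact masking/averaging are illustrative, not the script's code. The idea is to stack the matched layers, optionally LayerNorm both stacks, and take an MSE over non-padded positions:

```python
import torch
import torch.nn.functional as F

def hidden_state_loss(student_states, teacher_states, attention_mask,
                      normalize_hidden=True):
    # student_states / teacher_states: lists of (batch, seq, dim) tensors,
    # one per matched layer; attention_mask: (batch, seq), 1 for real tokens.
    s = torch.stack(student_states)  # (layers, batch, seq, dim)
    t = torch.stack(teacher_states)
    if normalize_hidden:
        s = F.layer_norm(s, s.shape[-1:])
        t = F.layer_norm(t, t.shape[-1:])
    m = attention_mask.to(s.dtype)[None, :, :, None]  # broadcast over layers/dim
    return F.mse_loss(s * m, t * m, reduction="sum") / m.sum()
```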
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6834/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6834", "html_url": "https://github.com/huggingface/transformers/pull/6834", "diff_url": "https://github.com/huggingface/transformers/pull/6834.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6834.patch", "merged_at": 1599242757000 }
https://api.github.com/repos/huggingface/transformers/issues/6833
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6833/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6833/comments
https://api.github.com/repos/huggingface/transformers/issues/6833/events
https://github.com/huggingface/transformers/pull/6833
688,791,906
MDExOlB1bGxSZXF1ZXN0NDc1OTYzNzc2
6,833
[s2s] command line args for faster val steps
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=h1) Report\n> Merging [#6833](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dfa10a41ba3fd9c5289bebd3baeff8792b1b2281?el=desc) will **decrease** coverage by `1.18%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6833/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6833 +/- ##\n==========================================\n- Coverage 80.02% 78.84% -1.19% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n- Hits 22876 22538 -338 \n- Misses 5710 6048 +338 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.21% <0.00%> (-40.45%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `64.44% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |\n| ... 
and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=footer). Last update [dfa10a4...14cdaee](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
cc @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6833/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6833", "html_url": "https://github.com/huggingface/transformers/pull/6833", "diff_url": "https://github.com/huggingface/transformers/pull/6833.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6833.patch", "merged_at": 1598904971000 }
https://api.github.com/repos/huggingface/transformers/issues/6832
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6832/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6832/comments
https://api.github.com/repos/huggingface/transformers/issues/6832/events
https://github.com/huggingface/transformers/issues/6832
688,785,103
MDU6SXNzdWU2ODg3ODUxMDM=
6,832
Model.fit on GPT2 and TPUs
{ "login": "alexorona", "id": 11825654, "node_id": "MDQ6VXNlcjExODI1NjU0", "avatar_url": "https://avatars.githubusercontent.com/u/11825654?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexorona", "html_url": "https://github.com/alexorona", "followers_url": "https://api.github.com/users/alexorona/followers", "following_url": "https://api.github.com/users/alexorona/following{/other_user}", "gists_url": "https://api.github.com/users/alexorona/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexorona/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexorona/subscriptions", "organizations_url": "https://api.github.com/users/alexorona/orgs", "repos_url": "https://api.github.com/users/alexorona/repos", "events_url": "https://api.github.com/users/alexorona/events{/privacy}", "received_events_url": "https://api.github.com/users/alexorona/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Humm for me it looks like it is an issue with the dataset creation, but I might be wrong as I don't have the code that creates the features.\r\n\r\nCan you try without the `steps_per_epoch` parameter?", "isn't steps_per_epoch equal to `num_examples/batch_size` ? futhermore your labels first dimention (batch) and dataset batch dimention must be equal i.e # rows of dataset/X == # rows of labels `where 32768 !=1024`", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: Yes ## Information Model I am using: GPT2 @jplu When using the keras model.fit() method, it looks like there's a problem with logits and tensors: `Compilation failure: logits and labels must have the same first dimension`. Setting `from_logits = False` doesn't seem to resolve the problem. Any suggestion on how to change model compilation or the dataset to fix this? ``` # TPU and Strategy Initialization resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR']) tf.config.experimental_connect_to_cluster(resolver) tf.tpu.experimental.initialize_tpu_system(resolver) strategy = tf.distribute.TPUStrategy(resolver) # Load and compile model with strategy.scope(): model = TFGPT2LMHeadModel.from_pretrained('gpt2-medium') model.resize_token_embeddings(len(tokenizer)) optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile(optimizer = optimizer, loss = loss) # Format inputs input_ids = tf.convert_to_tensor(input_ids) attention_mask = tf.convert_to_tensor(attention_mask) labels = tf.convert_to_tensor(labels) # Create dataset dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': input_ids, 'attention_mask': attention_mask}, labels)) num_examples = tf.data.experimental.cardinality(dataset).numpy() train_dataset = dataset.repeat().shuffle(num_examples).batch(8) # Train model model.fit(x = train_dataset, epochs = 1, batch_size = 8, steps_per_epoch = num_examples) --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) <ipython-input-10-4efb969ef56e> in <module>() ---> 23 model.fit(x = train_dataset, epochs = 1, batch_size = 8, steps_per_epoch = num_examples) InvalidArgumentError: 9 root error(s) found. (0) Invalid argument: {{function_node __inference_train_function_121591}} Compilation failure: logits and labels must have the same first dimension, got logits shape [32768,64] and labels shape [1024] [[{{node sparse_categorical_crossentropy_24/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]] TPU compilation failed [[tpu_compile_succeeded_assert/_142128106034484350/_6]] [[tpu_compile_succeeded_assert/_142128106034484350/_6/_223]] (1) Invalid argument: {{function_node __inference_train_function_121591}} Compilation failure: logits and labels must have the same first dimension, got logits shape [32768,64] and labels shape [1024] [[{{node sparse_categorical_crossentropy_24/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]] TPU compilation failed [[tpu_compile_succeeded_assert/_142128106034484350/_6]] [[tpu_compile_succeeded_assert/_142128106034484350/_6/_307]] ```
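One hedged workaround sketch, not verified on TPU: keep the labels shaped `(batch, seq_len)` like `input_ids`, and use a loss that flattens logits and labels together so the first dimensions line up. This assumes the model's first output is the logits tensor of shape `(batch, seq_len, vocab)`:

```python
import tensorflow as tf

def flat_sparse_ce(labels, logits):
    # logits: (batch, seq_len, vocab); labels: (batch, seq_len)
    vocab_size = tf.shape(logits)[-1]
    per_token = tf.keras.losses.sparse_categorical_crossentropy(
        tf.reshape(labels, [-1]),
        tf.reshape(logits, [-1, vocab_size]),
        from_logits=True,
    )
    return tf.reduce_mean(per_token)

# model.compile(optimizer=optimizer, loss=flat_sparse_ce)
```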
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6832/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6831
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6831/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6831/comments
https://api.github.com/repos/huggingface/transformers/issues/6831/events
https://github.com/huggingface/transformers/pull/6831
688,783,964
MDExOlB1bGxSZXF1ZXN0NDc1OTU4MTIy
6,831
Update ONNX notebook to include section on quantization.
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=h1) Report\n> Merging [#6831](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/32fe44086c2191c4551b7ff00db7ae1cace9b02e?el=desc) will **increase** coverage by `0.66%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6831/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6831 +/- ##\n==========================================\n+ Coverage 78.10% 78.77% +0.66% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n+ Hits 22328 22519 +191 \n+ Misses 6258 6067 -191 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.21% <0.00%> (-40.45%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `64.44% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.45% <0.00%> (-5.02%)` | :arrow_down: |\n| ... 
and [20 more](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=footer). Last update [32fe440...fb1404a](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
MEMBER
null
Added a section regarding quantization, with a performance comparison against PyTorch on CPU. Signed-off-by: Morgan Funtowicz <[email protected]>
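For context, the quantization step that a notebook section like this typically benchmarks can be sketched with the ONNX Runtime dynamic-quantization API (a hedged sketch, not necessarily the exact helper the notebook uses; file paths are placeholders):

```python
# Post-training dynamic quantization of an exported ONNX model: weights are
# stored as int8 and activations are quantized on the fly, which is what
# yields the CPU speed-up being compared against PyTorch.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    "onnx/bert-base-cased.onnx",        # placeholder: float32 input model
    "onnx/bert-base-cased-quant.onnx",  # placeholder: int8 output model
    weight_type=QuantType.QInt8,
)
```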
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6831/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6831", "html_url": "https://github.com/huggingface/transformers/pull/6831", "diff_url": "https://github.com/huggingface/transformers/pull/6831.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6831.patch", "merged_at": 1598902080000 }
https://api.github.com/repos/huggingface/transformers/issues/6830
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6830/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6830/comments
https://api.github.com/repos/huggingface/transformers/issues/6830/events
https://github.com/huggingface/transformers/issues/6830
688,782,238
MDU6SXNzdWU2ODg3ODIyMzg=
6,830
Related to abstractive text summarization
{ "login": "laibamehnaz", "id": 36405283, "node_id": "MDQ6VXNlcjM2NDA1Mjgz", "avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laibamehnaz", "html_url": "https://github.com/laibamehnaz", "followers_url": "https://api.github.com/users/laibamehnaz/followers", "following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}", "gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}", "starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions", "organizations_url": "https://api.github.com/users/laibamehnaz/orgs", "repos_url": "https://api.github.com/users/laibamehnaz/repos", "events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}", "received_events_url": "https://api.github.com/users/laibamehnaz/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "If you are looking for summerization in non-English languages you can try using `MBartForConditionalGeneration`, or multilingual Bert using the `EncoderDecoder` framework. Not sure if xlm-r is yet supported in `EncoderDecoder`", "Looking at xlm-r source code it seems that it can be easily added in EncoderDecoder as it subclasses Roberta which is supported in EncoderDecoder", "Alright. I can try mBART and mBERT. \r\nWhat I was wondering about XLM was, if we could use it in a language modeling setting for this task, like how we use GPT for any seq2seq task.\r\n\r\nSending both text and the summary together and calculating the loss only over the summaries.", "Not sure if that'll work since it's trained with MLM and encoder only with bi-directional attention. What you described above will need a causal LM with unidirectional attention.", "EncoderDecoder class allows you to use encoder only models as both encoder and decoder and fine-tune for seq-2-seq task. Here's an example of Roberta2Roberta fine-tuned on CNN dm https://huggingface.co/patrickvonplaten/roberta2roberta-cnn_dailymail-fp16", "Makes sense. Saw XLMWithLMHead in https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py, so just got curious.\r\n\r\n> Not sure if that'll work since it's trained with MLM and encoder only with bi-directional attention. What you described above will need a causal LM with unidirectional attention.\r\n\r\n", "> EncoderDecoder class allows you to use encoder only models as both encoder and decoder and fine-tune for seq-2-seq task. Here's an example of Roberta2Roberta fine-tuned on CNN dm https://huggingface.co/patrickvonplaten/roberta2roberta-cnn_dailymail-fp16\r\n\r\nOh thank you so much!", "Also what you said is doable, xlm-r can be used like a causal LM by configuring the attention mask. Might not give the best results though. See how RobertaForCausalLM is implemented. ", "> Makes sense. Saw XLMWithLMHead in https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py, so just got curious.\r\n> \r\n> > Not sure if that'll work since it's trained with MLM and encoder only with bi-directional attention. What you described above will need a causal LM with unidirectional attention.\r\n\r\nAah Sorry, typo, I meant XLM-R, not xlm", "> Also what you said is doable, xlm-r can be used like a causal LM by configuring the attention mask. Might not give the best results though. See how RobertaForCausalLM is implemented.\r\n\r\nOhh, sure. Will check it out.", "Also, what will be the best way to finetune T5 in a multi-task setting.", "Also, are there any models we can use for code-switched data.", "> Also, are there any models we can use for code-switched data.\r\n\r\nNot too familiar with this, but seen few models on model hub and they used Bert.\r\nhttps://huggingface.co/sagorsarker/codeswitch-hineng-ner-lince\r\n\r\n> Also, what will be the best way to finetune T5 in a multi-task setting.\r\n\r\nIf you can cast all your tasks in text-2-text format then multi-task training can be done simply using task pre-fixes as shown in the paper. Also I think the performance will depend upon the tasks and datasets so some experimentation is necessary. Most important thing when doing multi-task is how you sample examples from different tasks. 
See section 3.5.2 of T5 paper.\r\n\r\nAlso the best place to ask this question would be\r\nhttps://discuss.huggingface.co/t/t5-finetuning-tips/684", "Alright, thank you so much for the help !!", "I tried using Xlmr2Xlmr but seems that regardless of what input I provide I get the same output; I checked to see the is_decoder flag is set to true in the decoder. This issue persists throughout the finetuning process", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
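The warm-start pattern discussed in this thread can be sketched with the EncoderDecoder framework as follows (a minimal sketch; checkpoint names are examples, and XLM-R would slot in the same way since it subclasses Roberta):

```python
# Build a seq2seq model from two encoder-only checkpoints.
from transformers import EncoderDecoderModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "roberta-base",  # encoder: loaded as-is
    "roberta-base",  # decoder: loaded with is_decoder=True; cross-attention is new
)
```

Note that the decoder's cross-attention weights start out randomly initialized, which is one plausible reason an insufficiently fine-tuned Xlmr2Xlmr model (as reported at the end of the thread) emits the same output for every input.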
1,598
1,604
1,604
NONE
null
I was wondering if we could use XLM or XLM-R for abstractive text summarization.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6830/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6830/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6829
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6829/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6829/comments
https://api.github.com/repos/huggingface/transformers/issues/6829/events
https://github.com/huggingface/transformers/issues/6829
688,763,981
MDU6SXNzdWU2ODg3NjM5ODE=
6,829
No attribute '_mp_fn' when fine-tuning mbart for en-ro translation task using TPU
{ "login": "abedkhooli", "id": 11407254, "node_id": "MDQ6VXNlcjExNDA3MjU0", "avatar_url": "https://avatars.githubusercontent.com/u/11407254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abedkhooli", "html_url": "https://github.com/abedkhooli", "followers_url": "https://api.github.com/users/abedkhooli/followers", "following_url": "https://api.github.com/users/abedkhooli/following{/other_user}", "gists_url": "https://api.github.com/users/abedkhooli/gists{/gist_id}", "starred_url": "https://api.github.com/users/abedkhooli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abedkhooli/subscriptions", "organizations_url": "https://api.github.com/users/abedkhooli/orgs", "repos_url": "https://api.github.com/users/abedkhooli/repos", "events_url": "https://api.github.com/users/abedkhooli/events{/privacy}", "received_events_url": "https://api.github.com/users/abedkhooli/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "@abedkhooli Could I have the command you ran + environment details so that I can try to replicate this?\r\nThanks!\r\n", "Thanks @sshleifer for looking into this. \r\nTPU type: TPU v2 which is 8 cores, 64 GB (using Google Colab)\r\n```\r\n%%bash\r\nexport ENRO_DIR='/content/wmt_en_ro' # Download instructions above\r\n#export WANDB_PROJECT=\"MT\" # optional\r\nexport MAX_LEN=32\r\nexport BS=8\r\ncd /content/transformers\r\n./mbart_enro.sh\r\n```\r\nmbart_enro.sh:\r\n```\r\n#!/usr/bin/env bash\r\nexport PYTHONPATH=\"../\":\"${PYTHONPATH}\"\r\n\r\npython examples/xla_spawn.py --num_cores 8 \\\r\n\t examples/seq2seq/finetune.py \\\r\n --learning_rate=3e-5 \\\r\n --fp16 \\\r\n --do_train \\\r\n --val_check_interval=0.25 \\\r\n --adam_eps 1e-06 \\\r\n --num_train_epochs 1 --src_lang en_XX --tgt_lang ro_RO \\\r\n --data_dir $ENRO_DIR \\\r\n --max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \\\r\n --train_batch_size=$BS --eval_batch_size=$BS \\\r\n --task translation \\\r\n --warmup_steps 500 \\\r\n --freeze_embeds \\\r\n --model_name_or_path=facebook/mbart-large-cc25 \\\r\n --output_dir enro_finetune_baseline \\\r\n --label_smoothing 0.1 \\\r\n --fp16_opt_level=O1 --sortish_sampler --n_train 5000 --n_val 500 \\\r\n \"$@\"\r\n```\r\nI believe the issue is adding the correct _mp_fn to examples/seq2seq/finetune.py that matches the main() call (I am not an experienced coder :-)).\r\n", "I see a related [PR#5960](https://github.com/huggingface/transformers/pull/5960) - does that mean moving away from [xla_spawn](https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py) ?", "That PR is stalled, I am open to using any tpu implementation that works!\r\n", "If using [xla_spawn](https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py), and adding _mp_fn(..) to [finetune.py](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py), how should it (_mp_fn) be defined?", "I don't know, great question. Maybe @LysandreJik would know the answer.", "`_mp_fn(index)` should simply be an entry point to your script that leverages `transformers.Trainer`. You can see examples of it [here](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py).\r\n\r\nPlease note that we implemented this to mimic torch's `torch.distributed.launch`. I have no idea how this would work with a `pytorch-lightning` implementation. Doesn't pytorch-lightning have its own way of managing TPU training?", "The main() function in [finetune.py](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L350) takes arguments, so _mp_fn(index) signature won't work. \r\n```\r\ndef _mp_fn(index):\r\n # For xla_spawn (TPUs)\r\n main()\r\n```\r\n`Exception in device=TPU:0: main() missing 1 required positional argument: 'args'`", "Right, but even if you manage to make it work with the args, `finetune.py` is using pytorch-lightning so it won't work with `xla_spawn.py`. You can check the [pytorch-lightning docs](https://pytorch-lightning.readthedocs.io/en/latest/tpu.html) to see how to run on TPU.", "So, [lightning_base.py](https://github.com/huggingface/transformers/blob/master/examples/lightning_base.py#L165) is not ready for TPU yet.", "This is now supported by `Seq2SeqTrainer` which doesn't use PL.\r\nSee https://github.com/huggingface/transformers/blob/master/examples/seq2seq/builtin_trainer/finetune_tpu.sh" ]
1,598
1,602
1,602
CONTRIBUTOR
null
I followed the TPU example in the [examples folder](https://github.com/huggingface/transformers/tree/master/examples) and found that xla_spawn.py calls `xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)`, while [finetune.py](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py) does not have the "_mp_fn" found in some training scripts. I get ``` Traceback (most recent call last): File "examples/xla_spawn.py", line 72, in <module> main() File "examples/xla_spawn.py", line 68, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) AttributeError: module 'finetune' has no attribute '_mp_fn' ``` I tried to fix it by adding an _mp_fn: ``` def _mp_fn(index): # For xla_spawn (TPUs) main() ``` with and without args (`main(args)`), but neither worked.
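For reference, the `_mp_fn` contract expected by xla_spawn.py is just a thin wrapper around the script's entry point; a hedged sketch of the two variants discussed here (note the thread concludes this alone does not make the pytorch-lightning-based finetune.py TPU-ready):

```python
# Minimal pattern used by the Trainer-based example scripts: xla_spawn
# passes only the process index, and main() takes no arguments because
# each spawned process re-parses sys.argv itself.
def _mp_fn(index):
    main()

# If main() requires a parsed Namespace instead, each process has to
# rebuild it first; parse_args() is a placeholder for whatever argument
# parsing the script's own __main__ block performs.
def _mp_fn_with_args(index):
    args = parse_args()
    main(args)
```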
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6829/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6828
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6828/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6828/comments
https://api.github.com/repos/huggingface/transformers/issues/6828/events
https://github.com/huggingface/transformers/issues/6828
688,721,619
MDU6SXNzdWU2ODg3MjE2MTk=
6,828
regarding the max token length of longformer
{ "login": "rkoystart", "id": 64691602, "node_id": "MDQ6VXNlcjY0NjkxNjAy", "avatar_url": "https://avatars.githubusercontent.com/u/64691602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rkoystart", "html_url": "https://github.com/rkoystart", "followers_url": "https://api.github.com/users/rkoystart/followers", "following_url": "https://api.github.com/users/rkoystart/following{/other_user}", "gists_url": "https://api.github.com/users/rkoystart/gists{/gist_id}", "starred_url": "https://api.github.com/users/rkoystart/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rkoystart/subscriptions", "organizations_url": "https://api.github.com/users/rkoystart/orgs", "repos_url": "https://api.github.com/users/rkoystart/repos", "events_url": "https://api.github.com/users/rkoystart/events{/privacy}", "received_events_url": "https://api.github.com/users/rkoystart/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, @rkoystart \r\nI think this notebook will [help](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb)", "@patil-suraj so it means by default the longformers model provided by huggingface supports maximum tokens of 4096 right ?\r\nif suppose we want to pretrained model to support for even more longer sentences than 4096 we have to follow the instructions in the notebook you have mentioned above\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,605
1,605
NONE
null
In the encode_plus function of the tokenizer, there is an argument called max_length whose default value is 4096. Is it possible to increase the max token length beyond 4096, or is 4096 the maximum value of the max_length argument? Can anyone clear up my doubt? Thanks in advance!
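To separate the two limits in play: max_length is only a tokenizer argument, while the hard ceiling comes from the model's position embeddings, which are 4096 for the released Longformer checkpoints; going further requires extending the position embeddings as in the notebook linked in the comments. A minimal sketch (the checkpoint name is the standard release; long_document is a placeholder):

```python
from transformers import LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")

# max_length up to 4096 is safe for this checkpoint; a larger value would
# tokenize fine but overflow the model's position embeddings at forward time.
encoding = tokenizer.encode_plus(
    long_document,
    max_length=4096,
    truncation=True,
    padding="max_length",
    return_tensors="pt",
)
```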
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6828/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6827
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6827/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6827/comments
https://api.github.com/repos/huggingface/transformers/issues/6827/events
https://github.com/huggingface/transformers/pull/6827
688,679,819
MDExOlB1bGxSZXF1ZXN0NDc1ODgyNTU0
6,827
Add model card for singbert lite.
{ "login": "zyuanlim", "id": 7169731, "node_id": "MDQ6VXNlcjcxNjk3MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/7169731?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zyuanlim", "html_url": "https://github.com/zyuanlim", "followers_url": "https://api.github.com/users/zyuanlim/followers", "following_url": "https://api.github.com/users/zyuanlim/following{/other_user}", "gists_url": "https://api.github.com/users/zyuanlim/gists{/gist_id}", "starred_url": "https://api.github.com/users/zyuanlim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zyuanlim/subscriptions", "organizations_url": "https://api.github.com/users/zyuanlim/orgs", "repos_url": "https://api.github.com/users/zyuanlim/repos", "events_url": "https://api.github.com/users/zyuanlim/events{/privacy}", "received_events_url": "https://api.github.com/users/zyuanlim/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=h1) Report\n> Merging [#6827](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/22933e661fe789874ef58b13d3a9bb2554ba5891?el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6827/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6827 +/- ##\n==========================================\n- Coverage 80.02% 79.93% -0.10% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n- Hits 22877 22851 -26 \n- Misses 5709 5735 +26 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (+7.18%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=footer). 
Last update [22933e6...ceb655f](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
Add model card for singbert lite and update widget for singbert and singbert large.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6827/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6827", "html_url": "https://github.com/huggingface/transformers/pull/6827", "diff_url": "https://github.com/huggingface/transformers/pull/6827.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6827.patch", "merged_at": 1598782909000 }
https://api.github.com/repos/huggingface/transformers/issues/6826
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6826/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6826/comments
https://api.github.com/repos/huggingface/transformers/issues/6826/events
https://github.com/huggingface/transformers/issues/6826
688,666,524
MDU6SXNzdWU2ODg2NjY1MjQ=
6,826
Loading a converted pytorch model in huggingface transformers properly
{ "login": "AdirthaBorgohain", "id": 32612696, "node_id": "MDQ6VXNlcjMyNjEyNjk2", "avatar_url": "https://avatars.githubusercontent.com/u/32612696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdirthaBorgohain", "html_url": "https://github.com/AdirthaBorgohain", "followers_url": "https://api.github.com/users/AdirthaBorgohain/followers", "following_url": "https://api.github.com/users/AdirthaBorgohain/following{/other_user}", "gists_url": "https://api.github.com/users/AdirthaBorgohain/gists{/gist_id}", "starred_url": "https://api.github.com/users/AdirthaBorgohain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AdirthaBorgohain/subscriptions", "organizations_url": "https://api.github.com/users/AdirthaBorgohain/orgs", "repos_url": "https://api.github.com/users/AdirthaBorgohain/repos", "events_url": "https://api.github.com/users/AdirthaBorgohain/events{/privacy}", "received_events_url": "https://api.github.com/users/AdirthaBorgohain/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I converted a pre-trained tf model to pytorch using the following function. ``` def convert_tf_checkpoint_to_pytorch(*, tf_checkpoint_path, albert_config_file, pytorch_dump_path): # Initialise PyTorch model config = AlbertConfig.from_json_file(albert_config_file) print("Building PyTorch model from configuration: {}".format(str(config))) model = AlbertForPreTraining(config) # Load weights from tf checkpoint load_tf_weights_in_albert(model, config, tf_checkpoint_path) # Save pytorch-model print("Save PyTorch model to {}".format(pytorch_dump_path)) torch.save(model.state_dict(), pytorch_dump_path) ``` I am loading the converted model and encoding sentences in the following way: ``` def vectorize_sentence(text): albert_tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2") config = AlbertConfig.from_pretrained(config_path, output_hidden_states=True) model = TFAlbertModel.from_pretrained(pytorch_dir, config=config, from_pt=True) e = albert_tokenizer.encode(text, max_length=512) model_input = tf.constant(e)[None, :] # Batch size 1 output = model(model_input) v = [0] * 768 # generate sentence vectors by averaging the word vectors for i in range(1, len(model_input[0]) - 1): v = v + output[0][0][i].numpy() vector = v/len(model_input[0]) return vector ``` However while loading the model, a warning comes up: > Some weights or buffers of the PyTorch model TFAlbertModel were not > initialized from the TF 2.0 model and are newly initialized: > ['predictions.LayerNorm.bias', 'predictions.dense.weight', > 'predictions.LayerNorm.weight', 'sop_classifier.classifier.bias', > 'predictions.dense.bias', 'sop_classifier.classifier.weight', > 'predictions.decoder.bias', 'predictions.bias', > 'predictions.decoder.weight'] You should probably TRAIN this model on > a down-stream task to be able to use it for predictions and inference. Can anyone tell me if I am doing anything wrong? What does the warning mean? I saw issue #5588. Don't know if my issue is the same as this. <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**: https://stackoverflow.com/questions/63648380/loading-a-converted-pytorch-model-in-huggingface-transformers-properly
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6826/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6825
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6825/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6825/comments
https://api.github.com/repos/huggingface/transformers/issues/6825/events
https://github.com/huggingface/transformers/pull/6825
688,660,395
MDExOlB1bGxSZXF1ZXN0NDc1ODY4MzY0
6,825
Fixed open in colab link
{ "login": "PandaWhoCodes", "id": 6967017, "node_id": "MDQ6VXNlcjY5NjcwMTc=", "avatar_url": "https://avatars.githubusercontent.com/u/6967017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PandaWhoCodes", "html_url": "https://github.com/PandaWhoCodes", "followers_url": "https://api.github.com/users/PandaWhoCodes/followers", "following_url": "https://api.github.com/users/PandaWhoCodes/following{/other_user}", "gists_url": "https://api.github.com/users/PandaWhoCodes/gists{/gist_id}", "starred_url": "https://api.github.com/users/PandaWhoCodes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PandaWhoCodes/subscriptions", "organizations_url": "https://api.github.com/users/PandaWhoCodes/orgs", "repos_url": "https://api.github.com/users/PandaWhoCodes/repos", "events_url": "https://api.github.com/users/PandaWhoCodes/events{/privacy}", "received_events_url": "https://api.github.com/users/PandaWhoCodes/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=h1) Report\n> Merging [#6825](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/22933e661fe789874ef58b13d3a9bb2554ba5891?el=desc) will **increase** coverage by `0.20%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6825/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6825 +/- ##\n==========================================\n+ Coverage 80.02% 80.23% +0.20% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n+ Hits 22877 22936 +59 \n+ Misses 5709 5650 -59 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `59.43% <0.00%> (-35.85%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (+7.18%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <0.00%> (+57.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=footer). 
Last update [22933e6...747ed9e](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
Changed non-existent link - https://colab.research.google.com/github/huggingface/transformers/blob/master/notebooks/03-pipelines.ipynb to - https://colab.research.google.com/github/huggingface/transformers/blob/master/notebooks/03-pipelines.ipynb
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6825/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6825", "html_url": "https://github.com/huggingface/transformers/pull/6825", "diff_url": "https://github.com/huggingface/transformers/pull/6825.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6825.patch", "merged_at": 1598782861000 }
https://api.github.com/repos/huggingface/transformers/issues/6824
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6824/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6824/comments
https://api.github.com/repos/huggingface/transformers/issues/6824/events
https://github.com/huggingface/transformers/issues/6824
688,654,030
MDU6SXNzdWU2ODg2NTQwMzA=
6,824
How to convert '.bin' model to '.onnx'
{ "login": "AITutorials", "id": 61530230, "node_id": "MDQ6VXNlcjYxNTMwMjMw", "avatar_url": "https://avatars.githubusercontent.com/u/61530230?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AITutorials", "html_url": "https://github.com/AITutorials", "followers_url": "https://api.github.com/users/AITutorials/followers", "following_url": "https://api.github.com/users/AITutorials/following{/other_user}", "gists_url": "https://api.github.com/users/AITutorials/gists{/gist_id}", "starred_url": "https://api.github.com/users/AITutorials/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AITutorials/subscriptions", "organizations_url": "https://api.github.com/users/AITutorials/orgs", "repos_url": "https://api.github.com/users/AITutorials/repos", "events_url": "https://api.github.com/users/AITutorials/events{/privacy}", "received_events_url": "https://api.github.com/users/AITutorials/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6824/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6823
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6823/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6823/comments
https://api.github.com/repos/huggingface/transformers/issues/6823/events
https://github.com/huggingface/transformers/issues/6823
688,635,353
MDU6SXNzdWU2ODg2MzUzNTM=
6,823
How to use encode_plus to force padding to specific length
{ "login": "adriangrepo", "id": 37989457, "node_id": "MDQ6VXNlcjM3OTg5NDU3", "avatar_url": "https://avatars.githubusercontent.com/u/37989457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adriangrepo", "html_url": "https://github.com/adriangrepo", "followers_url": "https://api.github.com/users/adriangrepo/followers", "following_url": "https://api.github.com/users/adriangrepo/following{/other_user}", "gists_url": "https://api.github.com/users/adriangrepo/gists{/gist_id}", "starred_url": "https://api.github.com/users/adriangrepo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adriangrepo/subscriptions", "organizations_url": "https://api.github.com/users/adriangrepo/orgs", "repos_url": "https://api.github.com/users/adriangrepo/repos", "events_url": "https://api.github.com/users/adriangrepo/events{/privacy}", "received_events_url": "https://api.github.com/users/adriangrepo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I worked out that if you set padding=False then the text is padded out correctly with input_ids and attention_mask values of 0. This is the opposite setting of what I thought padding to mean but it works.\r\n\r\n'''\r\n\r\n >>input_ids: torch.Size([1, 50]), attention_mask: torch.Size([1, 50]) #pre flatten\r\n\r\n```" ]
1,598
1,598
1,598
NONE
null
I am using the following code in the __getitem__() method of my dataset: ``` class MyDataset(Dataset): def __init__(self, myargs): # other code here self.tokenizer = BertTokenizer.from_pretrained('bert-base-cased') self.max_len = 50 ``` __getitem__ method: ``` def __getitem__(self, idx): # other code here text_encoded = self.tokenizer.encode_plus( text, add_special_tokens=True, padding=True, truncation=True, max_length=self.max_len, return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt',) input_ids = text_encoded['input_ids'].flatten() attention_mask = text_encoded['attention_mask'].flatten() >>input_ids: torch.Size([9]), attention_mask: torch.Size([9]) >>input_ids: torch.Size([21]), attention_mask: torch.Size([21]) ``` Even though I have set padding and truncation to True and set a max_length, the returned lengths of the input_ids and attention_mask values in the returned text_encoded dict vary depending on the input text. Is this normal behaviour? If so, how can I ensure that every returned sample is padded and truncated to a specific length?
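A hedged note on what is happening here (transformers 3.x padding semantics): padding=True means "pad to the longest sequence in the batch", which is a no-op when encoding one example at a time, and it takes precedence over the legacy pad_to_max_length=True; that would also explain the comment above, since with padding=False the legacy flag kicks in. The explicit form is padding="max_length":

```python
# Minimal sketch of the fix: every sample comes back as shape [1, max_len].
text_encoded = self.tokenizer.encode_plus(
    text,
    add_special_tokens=True,
    max_length=self.max_len,
    padding="max_length",   # pad to max_length, not to longest-in-batch
    truncation=True,
    return_token_type_ids=False,
    return_attention_mask=True,
    return_tensors="pt",
)
```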
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6823/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6822
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6822/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6822/comments
https://api.github.com/repos/huggingface/transformers/issues/6822/events
https://github.com/huggingface/transformers/pull/6822
688,611,179
MDExOlB1bGxSZXF1ZXN0NDc1ODMzMzk2
6,822
[s2s README] link to cnn dataset with empty lines removed
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6822/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6822", "html_url": "https://github.com/huggingface/transformers/pull/6822", "diff_url": "https://github.com/huggingface/transformers/pull/6822.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6822.patch", "merged_at": null }