url (stringlengths 62-66) | repository_url (stringclasses, 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64, 377M-2.15B) | node_id (stringlengths 18-32) | number (int64, 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses, 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, ⌀) | author_association (stringclasses, 4 values) | active_lock_reason (stringclasses, 2 values) | body (stringlengths 0-234k, ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses, 3 values) | draft (bool, 2 classes) | pull_request (dict)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/5520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5520/comments | https://api.github.com/repos/huggingface/transformers/issues/5520/events | https://github.com/huggingface/transformers/issues/5520 | 650,950,830 | MDU6SXNzdWU2NTA5NTA4MzA= | 5,520 | AdamW step device error | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | I have a model:
```python
class M(nn.Module):
    def __init__(self, ...):
        super().__init__()
        self.A = nn.Parameter(...)
        self.B = nn.Parameter(...)
        self.C = torch.einsum(..., self.A, self.B)

    def forward(self, D):
        return func(self.C.to(D.device), D)
```
However, I'm getting the following error when training this model on GPU:
```
  File "file.py", line 71, in optimizer_step
    optimizer.step()
  File ".../lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper
    return wrapped(*args, **kwargs)
  File ".../lib/python3.7/site-packages/transformers/optimization.py", line 155, in step
    exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)
RuntimeError: expected device cuda:0 but got device cpu
```
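For reference, here is a workaround sketch (an assumption about the cause rather than a confirmed fix; `dim` and the einsum subscripts are placeholders I made up): recomputing `C` inside `forward` keeps it on the parameters' current device, instead of freezing a CPU copy at construction time.
```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self, dim: int = 8):  # hypothetical size, for illustration only
        super().__init__()
        self.A = nn.Parameter(torch.randn(dim, dim))
        self.B = nn.Parameter(torch.randn(dim, dim))

    def forward(self, D: torch.Tensor) -> torch.Tensor:
        # Recomputed on every call, so C lives wherever A and B currently live;
        # no stale CPU tensor survives a later model.to("cuda").
        C = torch.einsum("ij,jk->ik", self.A, self.B)
        return C @ D
```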
This is happening in 2.11.0. Am I doing something incorrectly here? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5520/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5519/comments | https://api.github.com/repos/huggingface/transformers/issues/5519/events | https://github.com/huggingface/transformers/issues/5519 | 650,950,674 | MDU6SXNzdWU2NTA5NTA2NzQ= | 5,519 | Get prediction_scores from BART forward method | {
"login": "mmsamiei",
"id": 12582703,
"node_id": "MDQ6VXNlcjEyNTgyNzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/12582703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmsamiei",
"html_url": "https://github.com/mmsamiei",
"followers_url": "https://api.github.com/users/mmsamiei/followers",
"following_url": "https://api.github.com/users/mmsamiei/following{/other_user}",
"gists_url": "https://api.github.com/users/mmsamiei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmsamiei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmsamiei/subscriptions",
"organizations_url": "https://api.github.com/users/mmsamiei/orgs",
"repos_url": "https://api.github.com/users/mmsamiei/repos",
"events_url": "https://api.github.com/users/mmsamiei/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmsamiei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I just noticed the same.\r\n\r\nIt seems that you have to include the `labels` parameter as well to get the predictions for your `decoder_input_ids`. You didn't have to do this in versions prior to 3.0, so it appears as if something has changed so that it's required now.\r\n\r\nMaybe @sshleifer knows since I think he's been doing quite a bit of work on the summarization bits as of late.",
"Great catch. I think you are spot on that the API changed a bit in 3.0. We should have documented it better.\r\n\r\nIf you pass `use_cache=False` to `model() this problem goes away. (use_cache is set to true by default to speed up seq2seq tasks).\r\nYou can also pass `use_cache=False` to `from_pretrained`, as shown below:\r\n\r\n```python\r\ntokenizer = BartTokenizer.from_pretrained('facebook/bart-large')\r\nmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-large', use_cache=False)\r\ninputs = tokenizer.encode(\" Hello, my dog is cute\", return_tensors=\"pt\")\r\ndecoder_input = tokenizer.encode(\" Oh! I don't know that you have dog! How is it?\", return_tensors=\"pt\")\r\noutput = model(input_ids=inputs,decoder_input_ids=decoder_input)[0]\r\nassert output.shape[1] ==17 # passes\r\n```",
"> Great catch. I think you are spot on that the API changed a bit in 3.0. We should have documented it better.\r\n> \r\n> If you pass `use_cache=False` to `model() this problem goes away. (use_cache is set to true by default to speed up seq2seq tasks). You can also pass `use_cache=False`to`from_pretrained`, as shown below:\r\n> \r\n> ```python\r\n> tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')\r\n> model = BartForConditionalGeneration.from_pretrained('facebook/bart-large', use_cache=False)\r\n> inputs = tokenizer.encode(\" Hello, my dog is cute\", return_tensors=\"pt\")\r\n> decoder_input = tokenizer.encode(\" Oh! I don't know that you have dog! How is it?\", return_tensors=\"pt\")\r\n> output = model(input_ids=inputs,decoder_input_ids=decoder_input)[0]\r\n> assert output.shape[1] ==17 # passes\r\n> ```\r\n\r\nThanks a lot! I think it's better to publish a post which explains differences between version 2 and 3.0 "
] | 1,593 | 1,594 | 1,594 | NONE | null | # ❓ Questions & Help
## Details
I'm trying to implement a model by fine-tuning BART for a dialogue task. This is my sample code:
```python
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')
inputs = tokenizer.encode(" Hello, my dog is cute", return_tensors="pt")
decoder_input = tokenizer.encode(" Oh! I don't know that you have dog! How is it?", return_tensors="pt")
output = model(input_ids=inputs,decoder_input_ids=decoder_input)[0]
```
I want to get prediction scores for tokens, so I expected the output to have shape [1, 17, 50265] (17 being the length of the `decoder_input` sequence), but it has shape [1, 1, 50265].
How can I get the prediction_scores?
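The `use_cache=False` fix quoted in the comments above, restated against this snippet (a sketch assuming the transformers 3.x behavior where the cache makes the decoder return only the last step):
```python
# With caching disabled, the logits cover every decoder position.
output = model(input_ids=inputs, decoder_input_ids=decoder_input, use_cache=False)[0]
print(output.shape)  # expected: torch.Size([1, 17, 50265])
```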
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5519/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5519/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5518/comments | https://api.github.com/repos/huggingface/transformers/issues/5518/events | https://github.com/huggingface/transformers/pull/5518 | 650,911,929 | MDExOlB1bGxSZXF1ZXN0NDQ0Mjk4NzI3 | 5,518 | Make T5 compatible with ONNX | {
"login": "abelriboulot",
"id": 34995848,
"node_id": "MDQ6VXNlcjM0OTk1ODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/34995848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abelriboulot",
"html_url": "https://github.com/abelriboulot",
"followers_url": "https://api.github.com/users/abelriboulot/followers",
"following_url": "https://api.github.com/users/abelriboulot/following{/other_user}",
"gists_url": "https://api.github.com/users/abelriboulot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abelriboulot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abelriboulot/subscriptions",
"organizations_url": "https://api.github.com/users/abelriboulot/orgs",
"repos_url": "https://api.github.com/users/abelriboulot/repos",
"events_url": "https://api.github.com/users/abelriboulot/events{/privacy}",
"received_events_url": "https://api.github.com/users/abelriboulot/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=h1) Report\n> Merging [#5518](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `1.02%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5518 +/- ##\n==========================================\n- Coverage 77.83% 76.81% -1.03% \n==========================================\n Files 141 141 \n Lines 24634 24637 +3 \n==========================================\n- Hits 19175 18925 -250 \n- Misses 5459 5712 +253 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `84.44% <100.00%> (+0.09%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=footer). Last update [58cca47...904fa94](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks a lot for the review @mfuntowicz! I adjusted it to the coding style you outlined. Feel free to merge if you're happy with it.",
"LGTM! Thanks @abelriboulot, great addition ๐ ",
"Hey, did you happen to make a colab which shows this off? I was trying to figure out exporting T5 as ONNX a week ago, but got stuck. It seems you've fixed it though?",
"@ConProgramming sure thing, Iโll share something this weekend!",
"@abelriboulot Did you ever get around to making that colab? It'd help a lot. ๐
",
"Hey @ConProgramming, I had a very ad-hoc solution for this, therefore I worked on a PR to make the huggingface conversion compatible with all models with a compatible graph. You can take a look at it there: #5687\r\nIf you pull this version you should be able to export T5 with the following line:\r\n`python convert_graph_to_onnx.py --framework pt --model t5-base ~/test-t5/t5.onnx --check-loading --opset 12`\r\n\r\nI checked and it seems to work well! Let me know if it works for you.",
"Thanks @abelriboulot, but I'm still having some issues with it... it works with t5-base, but depending on how I provide the path to my own model I get two different errors:\r\n\r\n- `!python transformers/src/transformers/convert_graph_to_onnx.py --framework pt --model \"drive/My Drive/paraphraser/t5_paraphrase/pytorch_model.bin\" onnx/paraphraser.onnx --check-loading --opset 12` : Error while converting the model: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte\r\n- `drive/My Drive/paraphraser/t5_paraphrase` : Error while converting the model: Model name 'drive/My Drive/paraphraser/t5_paraphrase' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'drive/My Drive/paraphraser/t5_paraphrase' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\r\n\r\nIs it designed to work with finetuned models?",
"Hey @ConProgramming, it should work on fine tuned models, you can have a look at the test_onnx file as an example of this. The model path should be to the directory that contains the model (and the tokenizer in case you do not specify it). It looks like the second error relates to not being able to find a tokenizer, is it present in your directory? If you are using another directory / pretrained model you can specify it with --tokenizer\r\nIf you still have issues and it's something you can share, I'm happy to have a look and help you with this.",
"@abelriboulot Adding `--tokenizer t5-base` fixed the issue and exported a model without any errors... looks like it worked, thanks again!!",
"Oh awesome! Great to hear it! I might add a message to make it more obvious to the user.",
"@abelriboulot \r\n\r\nI tried this (for cpu):\r\nconvert_graph_to_onnx.py --framework=pt --tokenizer=t5-base --model=t5-base onnx\\t5.onnx --check-loading --opset=12\r\n\r\n but getting error:\r\n\r\nONNX opset version set to: 12\r\nLoading pipeline (model: t5-base, tokenizer: t5-base)\r\nSome weights of T5Model were not initialized from the model checkpoint at t5-base and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nUsing framework PyTorch: 1.5.1+cpu\r\nFound input input_ids with shape: {0: 'batch', 1: 'sequence'}\r\nFound input attention_mask with shape: {0: 'batch', 1: 'sequence'}\r\nFound output output_0 with shape: {0: 'batch', 1: 'sequence'}\r\n**Error while converting the model: 'BaseModelOutputWithPast' object has no attribute 'shape'**\r\n\r\nAm I doing something wrong here?",
"Hey @oliversms, are you using the specific fork or master? I can confirm the command you submitted works on my side.",
"Apologies for the delayed reply; Im actually using the fork. I beleive it may have been an env related issue. However after getting past that issue Im now running into a new issue:\r\nSpecifically on this line:\r\n[tokens = nlp.tokenizer(\"This is a sample output\", return_tensors=framework)](https://github.com/abelriboulot/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/convert_graph_to_onnx.py#L89)\r\ngetting this error: \r\nValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.\r\n\r\nAttempting set padding & truncation to True doesnt fix the issue.",
"Hey @oliversms! It looks like you are not using the right branch. You need [this specific branch](https://github.com/abelriboulot/transformers/tree/update_convert_to_onnx) for it to work. Hope it works for you!\r\n\r\nAbel",
"If anyone needs, I created a small package ([onnxt5](https://github.com/abelriboulot/onnxt5)) which lets you easily and efficiently export T5 and serve it! Feel free to raise issues, it's an alpha at the moment.",
"@abelriboulot Hi, I pulled your branch and tried to convert a t5-base with \r\npython ../transformers/src/convert_graph_to_onnx.py --framework pt --model t5-base t5-base.onnx --check-loading --opset 12\r\n\r\nand still got the \"Error while converting the model: You have to specify either decoder_input_ids or decoder_inputs_embeds\" Any ideas?"
] | 1,593 | 1,603 | 1,594 | CONTRIBUTOR | null | This is a small PR to make T5 exportable to ONNX with any opset > 9. It addresses an issue outlined in #5075 where T5 would not export to ONNX. In order to make it exportable, 2 changes are made:
- A `torch.einsum` call is replaced with a tensor multiplication in 96d0ec7, since ONNX does not currently support this notation
- Decoder inputs / embeddings are defaulted to the encoder's inputs / embeddings if they are not declared. I believe this is clearer, as most of the examples right now include something along the lines of `model(input_ids=input_ids, decoder_input_ids=input_ids)`. It also allows T5 to be executed with the more common call pattern `model(inputs)` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5518/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5518",
"html_url": "https://github.com/huggingface/transformers/pull/5518",
"diff_url": "https://github.com/huggingface/transformers/pull/5518.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5518.patch",
"merged_at": 1594114349000
} |
https://api.github.com/repos/huggingface/transformers/issues/5517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5517/comments | https://api.github.com/repos/huggingface/transformers/issues/5517/events | https://github.com/huggingface/transformers/issues/5517 | 650,880,601 | MDU6SXNzdWU2NTA4ODA2MDE= | 5,517 | getting different model result from tokenizer vs tokenizer.encode function | {
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,593 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
I am using the GPT-2 model to encode sentences, and I am confused about the difference between `tokenizer.encode` and calling `tokenizer` directly.
If I use `tokenizer.encode`:
```python
sentence = 'checking single sentences'
from transformers import GPT2Tokenizer, GPT2Model
import torch
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained('gpt2-medium')
input_ids = torch.tensor(tokenizer.encode(sentence)).unsqueeze(0) # Batch size 1
outputs = model(input_ids)
outputs[0]
```
the output is:
```
tensor([[[ 0.6804, 0.4182, 0.3483, ..., -0.3102, 0.0341, 0.4901],
[-0.4060, 0.7790, 0.2695, ..., -0.4763, 0.1817, 0.0600],
[ 0.7916, 0.6078, 0.4642, ..., -0.5557, -0.1571, -0.1220]]],
grad_fn=<ViewBackward>)
```
While using `tokenizer` alone:
```python
from transformers import GPT2Tokenizer, GPT2Model
sentence = 'checking single sentences'
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained('gpt2-medium')
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)
outputs[0]
```
the output is:
```
# tensor([[[ 0.5511, 0.4063, 0.2453, ..., -0.4217, 0.1519, 0.0898],
# [-0.1138, -0.0232, 0.1736, ..., -0.5408, 0.0145, 0.2985],
# [ 0.1856, 0.3127, 0.2430, ..., -0.7532, -0.2332, 0.1506]]],
# grad_fn=<ViewBackward>)
```
I tried to examine the output of both tokenizers:
```python
from transformers import GPT2Tokenizer, GPT2Model
import torch
sentence = 'checking single sentences'
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
input_ids = tokenizer.encode(sentence)
inputs = tokenizer(sentence, return_tensors="pt")
print(input_ids)
print(inputs)
```
output:
```
[41004, 2060, 13439]
{'input_ids': tensor([[41004, 2060, 13439]]), 'token_type_ids': tensor([[0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1]])}
```
Which is recommended tokenizer function?
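A hedged guess at why the two results differ (an assumption based on the printouts above: GPT-2 reuses its token embedding matrix for `token_type_ids`, so the all-zero type ids returned by `tokenizer(...)` add one extra embedding to every position). Dropping that key should make the two paths agree:
```python
import torch

inputs = tokenizer(sentence, return_tensors="pt")
inputs = {k: v for k, v in inputs.items() if k != "token_type_ids"}  # drop the zeros
outputs_call = model(**inputs)[0]

input_ids = torch.tensor(tokenizer.encode(sentence)).unsqueeze(0)
outputs_encode = model(input_ids)[0]
assert torch.allclose(outputs_call, outputs_encode)  # expected to pass
```
Calling the tokenizer directly is the API recommended since v3.0; the extra keys it returns are what change the model output here.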
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5517/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5516/comments | https://api.github.com/repos/huggingface/transformers/issues/5516/events | https://github.com/huggingface/transformers/pull/5516 | 650,869,781 | MDExOlB1bGxSZXF1ZXN0NDQ0MjY5MzYz | 5,516 | Addition of a DialoguePipeline | {
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is a back-port of https://github.com/guillaume-be/rust-bert/pull/57. I did not implement the ConversationManager as I felt it did not quite fit the general API of this library. I however added the concept of `Conversations` keeping track of past user inputs, generated responses and the history token ids. Conversations include a print option that will display the entire dialogue.\r\n\r\n```python\r\nprint(conversation)\r\n```\r\n```\r\nConversation id: 2716da3e-8cde-4071-97bc-218d88764b7b \r\nuser >> What's the last book you have read? \r\nbot >> The Last Question \r\nuser >> Why do you recommend it? \r\nbot >> It's a good book. \r\n```\r\n\r\n(ps: note that this example is the response of `DialoGPT-medium` without sampling, which is an interesting coincidence for a computer-generated response)",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=h1) Report\n> Merging [#5516](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/91cb95461e438dc57555c4f57f8ce95a56328036&el=desc) will **increase** coverage by `1.49%`.\n> The diff coverage is `84.34%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5516 +/- ##\n==========================================\n+ Coverage 78.35% 79.85% +1.49% \n==========================================\n Files 146 146 \n Lines 26454 26568 +114 \n==========================================\n+ Hits 20729 21215 +486 \n+ Misses 5725 5353 -372 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <รธ> (รธ)` | |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.36% <84.34%> (+0.86%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `71.83% <0.00%> (-23.95%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.65% <0.00%> (-23.68%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.36% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `66.66% <0.00%> (+3.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+34.61%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=footer). Last update [91cb954...9734829](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@patrickvonplaten Thank you so much for the detailed review! I have pushed some changes addressing most of the suggested changes.\r\n\r\nA few points still open for discussion:\r\n- I am still unsure about using the default `max_length` from the model_config, especially since the default model used for the pipeline `DialoGPT` does not give this value and it would default to a very low value.\r\n- I have updated the history truncation to take out entire spans (delimited by EOS tokens). I believe this would lead to a more natural truncation of a conversation (skip the first N turn) rather than possibly cutting an input/response in the middle\r\n- You are right that the API differs a bit from the other pipeline (although the QA pipeline also takes special inputs). Here the challenge is that the conversations are by definition stateful. I actually started the Rust implementation storing the conversations in the model itself, allowing simpler inputs to be passed on. This requires the model to be mutable, which can be problematic for multithreading. Now in Python you may not have this issue, but you may want to deploy an conversational application that scales and deploys multiple conversation pipeline workers / containers (the Rust compiler actually points out to an interesting design decision). If you make these stateful, you'll have to ensure that you always send the conversation to the correct worker which can be sub-optimal for load balancing. I believe carrying the history in the conversation itself would be easier to handle for the full system that may leverage this pipeline. Happy to change if you'd like to allow the user to pass a simpler input, but a conversation `id` would probably be required anyway. In any case happy to expose `Conversation` to the library API if the design remains as is.\r\n\r\nThanks again for the feedback - looking forward to your thoughts on these few points.",
"Thanks a lot for the detailed answer @guillaume-be,\r\n\r\n1. (answering to your comment above) - exactly, I would update the config on AWS \r\n and it would then look like this:\r\n```\r\n{\r\n \"activation_function\": \"gelu_new\",\r\n \"architectures\": [\r\n \"GPT2LMHeadModel\"\r\n ],\r\n \"attn_pdrop\": 0.1,\r\n \"bos_token_id\": 50256,\r\n \"embd_pdrop\": 0.1,\r\n \"eos_token_id\": 50256,\r\n \"initializer_range\": 0.02,\r\n \"layer_norm_epsilon\": 1e-05,\r\n \"model_type\": \"gpt2\",\r\n \"n_ctx\": 1024,\r\n \"n_embd\": 1024,\r\n \"n_head\": 16,\r\n \"n_layer\": 24,\r\n \"n_positions\": 1024,\r\n \"resid_pdrop\": 0.1,\r\n \"summary_activation\": null,\r\n \"summary_first_dropout\": 0.1,\r\n \"summary_proj_to_labels\": true,\r\n \"summary_type\": \"cls_index\",\r\n \"summary_use_proj\": true,\r\n \"vocab_size\": 50257,\r\n \"task_specific_params\" : {\r\n \"dialogue\": { \r\n \"max_length\": 1000\r\n }\r\n }\r\n}\r\n```\r\n=> This way in the `__init__` of `Pipeline` the `task_specific_params` will overwrite the default params and the `max_length` will be 1000. I think this has following advantage: If the user wants to define a specific `max_length` via the model config, he could update or create a config to `model.config.task_specific_params.dialogue.max_length = 1000` and does not have to input any generate kwargs to the `__call__` function (otherwise the config would always be overwritten by 1000). As soon as we merge this PR, I can update all the DialoGPT configs with the `task_specific_params`.\r\n\r\n2. Very nice! This indeed makes more sense and should lead to good (possible never-ending chats) :-) \r\n\r\n3. I agree - I think I'm also in favor of your design choice here. Sadly, I don't have the necessary Rust knowledge to have a better understanding of the consequences of stateful vs. stateless, but it makes a lot of sense what you say! Maybe looping in @mfuntowicz here to see what he thinks :-) ",
"Thanks @guillaume-be for your PR!\r\n\r\nI left a bunch of comments, but this is in good shape, can't wait to use it on https://huggingface.co/microsoft/DialoGPT-large ๐ \r\n\r\nRe. statefulness, yes, we definitely want to keep a stateless system.\r\n\r\nAs an example, the way our https://convai.huggingface.co/ demo from last year works (and probably the way our conversational widget powered by this PR will work too ๐) is, we store an array of `Message` on the client side, grow it, and send it with each request. It's text so the bandwidth is not an issue.\r\n\r\nMessage looks like:\r\n\r\n```\r\ninterface Message {\r\n\tincoming: boolean; // <- is it from bot or user\r\n\tcontent: string;\r\n}\r\n```\r\n\r\n",
"LGTM",
"@julien-c @patrickvonplaten I believe all comments have been addressed - please let me know if I have missed anything. Just resolved the conflict with master. Getting an error with `code quality`, not quite sure what is wrong as I did not change the torch requirement with this PR.",
"Given that `patrickvonplaten` is off for one more week I believe, do you want to give this PR a last look-over @sgugger and merge if it's fine?",
"@LysandreJik Thank you very much for the review. Good catch on the behaviour of the `eos` token cut-off. I have updated based on your suggestions, and added docstrings to the `Conversation` class. I have also added both `Conversation` and `ConversationalPipeline` to the top-level `__init__` for consistency with the other pipelines.",
"Thanks for the PR @guillaume-be \r\nDocstrings could be improved but I'll clean up the docs in the pipelines file soon, so will take of that. For future PRs, please remember that `thing` will render thing in italics in the docs, and not in code (you have to use ``thing`` or :obj:`thing`).",
"@sgugger Thank you for the review - was indeed a typo on my end. The tests got triggered again and unfortunately a hash verification on torch fails. Could you please restart the build if you have a chance?",
"This is awesome, congrats everyone on shipping this! ๐ฅ",
"`test_torch_conversation` and `test_integration_torch_conversation` are broken on github actions CI. Could someone fix or delete? https://github.com/huggingface/transformers/runs/1289790225?check_suite_focus=true",
"Will take a look.",
"Should be fixed in https://github.com/huggingface/transformers/pull/7970"
] | 1,593 | 1,603 | 1,596 | CONTRIBUTOR | null | - Addition of a Conversation object to keep track of multi-turn conversation
- Creation of a DialoguePipeline to process Conversations using the history context
- Integration tests for DialoguePipeline using `microsoft/DialoGPT-medium` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5516/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5516/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5516",
"html_url": "https://github.com/huggingface/transformers/pull/5516",
"diff_url": "https://github.com/huggingface/transformers/pull/5516.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5516.patch",
"merged_at": 1596132700000
} |
https://api.github.com/repos/huggingface/transformers/issues/5515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5515/comments | https://api.github.com/repos/huggingface/transformers/issues/5515/events | https://github.com/huggingface/transformers/pull/5515 | 650,861,032 | MDExOlB1bGxSZXF1ZXN0NDQ0MjYzMjYz | 5,515 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5515/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5515",
"html_url": "https://github.com/huggingface/transformers/pull/5515",
"diff_url": "https://github.com/huggingface/transformers/pull/5515.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5515.patch",
"merged_at": 1594118350000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5514/comments | https://api.github.com/repos/huggingface/transformers/issues/5514/events | https://github.com/huggingface/transformers/pull/5514 | 650,859,890 | MDExOlB1bGxSZXF1ZXN0NDQ0MjYyNDQ2 | 5,514 | added model card for ukr-roberta-base | {
"login": "vitaliyradchenko",
"id": 13647822,
"node_id": "MDQ6VXNlcjEzNjQ3ODIy",
"avatar_url": "https://avatars.githubusercontent.com/u/13647822?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitaliyradchenko",
"html_url": "https://github.com/vitaliyradchenko",
"followers_url": "https://api.github.com/users/vitaliyradchenko/followers",
"following_url": "https://api.github.com/users/vitaliyradchenko/following{/other_user}",
"gists_url": "https://api.github.com/users/vitaliyradchenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitaliyradchenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitaliyradchenko/subscriptions",
"organizations_url": "https://api.github.com/users/vitaliyradchenko/orgs",
"repos_url": "https://api.github.com/users/vitaliyradchenko/repos",
"events_url": "https://api.github.com/users/vitaliyradchenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitaliyradchenko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=h1) Report\n> Merging [#5514](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `1.73%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5514 +/- ##\n==========================================\n- Coverage 77.83% 76.10% -1.74% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n- Hits 19175 18748 -427 \n- Misses 5459 5886 +427 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `20.49% <0.00%> (-55.49%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=footer). Last update [58cca47...95bbeef](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5514/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5514",
"html_url": "https://github.com/huggingface/transformers/pull/5514",
"diff_url": "https://github.com/huggingface/transformers/pull/5514.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5514.patch",
"merged_at": 1594118424000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5513/comments | https://api.github.com/repos/huggingface/transformers/issues/5513/events | https://github.com/huggingface/transformers/issues/5513 | 650,852,405 | MDU6SXNzdWU2NTA4NTI0MDU= | 5,513 | Fail in some tests (with detailed description) | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It seems most of the tests that fail are due to `CUDA error: out of memory`?",
"> It seems most of the tests that fail are due to `CUDA error: out of memory`?\r\n\r\nYes. As the message shows, they are due to `CUDA error: out of memory`. \r\nBut I am sure that **no** other processes are using CUDA **before and after** I run the test. \r\nMy machine is with 4 Tesla V100 GPUs. And it works fine with other programs using both single and multi GPUs (e.g. fairseq). \r\nSo I am confused with the error messages. \r\n\r\nBesides, there are some other error messages that show `invalid size` and `INTERNAL ASSERT FAILED`.\r\n\r\nI have tried many environments. All of them fail in `test_multigpu_data_parallel_forward` due to **different** errors which are mainly `out of memory` and `invalid size`. \r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | # ❓ Questions & Help
I have followed the README to install transformers.
However, some tests fail (mainly `test_multigpu_data_parallel_forward`).
```
======================================================================================================================== short test summary info =========================================================================================================================
FAILED tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward - RuntimeError: tensor.ndimension() == static_cast<int64_t>(expected_size.size()) INTERNAL ASSERT FAILED at /pytorch/torch/csrc/cuda/comm.cpp:225, please report a bug to ...
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_multigpu_data_parallel_forward - RuntimeError: Gather got an input of invalid size: got [2, 2, 4, 7, 8], but expected [2, 4, 4, 7, 8] (gather at /pytorch/torch/csrc/cuda/comm.cpp:231)
FAILED tests/test_modeling_bart.py::BartHeadTests::test_lm_forward - timeout_decorator.timeout_decorator.TimeoutError: 'Timed Out'
FAILED tests/test_modeling_ctrl.py::CTRLModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA error: out of memory
FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA error: out of memory
FAILED tests/test_modeling_openai.py::OpenAIGPTModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.78 GiB total capacity; 5.33 MiB already allocated; 17.19 MiB free; 6.00 MiB reserved in total ...
FAILED tests/test_modeling_mobilebert.py::MobileBertModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA error: out of memory
================================================================================================= 7 failed, 1066 passed, 562 skipped, 1396 warnings in 333.27s (0:05:33) =================================================================================================
```
## Details
I have tried many environments:
- conda: python3.7 + torch1.5.1 + cuda10.2
- venv: python3.7 + torch1.5.1 + cuda10.2
- conda: python3.6 + torch1.4.0 + cuda10.1
- venv: python3.6 + torch1.4.0 + cuda10.1
- conda: python3.7 + torch1.5.1 + cuda10.2 + apex
- venv: python3.6 + torch1.4.0 + cuda10.1 + apex
And I installed `transformers` **from source**.
All of them failed in 5-7 tests.
The `short test summary info` outputs of them are not all the same.
But **all** of them are related to `test_multigpu_data_parallel_forward`.
One of the outputs of `transformers-cli env` is as follows
(with the setting: venv: python3.6 + torch1.4.0 + cuda10.1 + apex):
```
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 3.0.1
- Platform: Linux-4.15.0-1067-azure-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.11
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
For my case (just fill out the two last points),
```
- `transformers` version: 3.0.1
- Platform: Linux-4.15.0-1067-azure-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.11
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
```
For this case, the `short test summary info` shows:
```
======================================================================================================================== short test summary info =========================================================================================================================
FAILED tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward - RuntimeError: tensor.ndimension() == static_cast<int64_t>(expected_size.size()) INTERNAL ASSERT FAILED at /pytorch/torch/csrc/cuda/comm.cpp:225, please report a bug to ...
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_multigpu_data_parallel_forward - RuntimeError: Gather got an input of invalid size: got [2, 2, 4, 7, 8], but expected [2, 4, 4, 7, 8] (gather at /pytorch/torch/csrc/cuda/comm.cpp:231)
FAILED tests/test_modeling_bart.py::BartHeadTests::test_lm_forward - timeout_decorator.timeout_decorator.TimeoutError: 'Timed Out'
FAILED tests/test_modeling_ctrl.py::CTRLModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA error: out of memory
FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA error: out of memory
FAILED tests/test_modeling_openai.py::OpenAIGPTModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.78 GiB total capacity; 5.33 MiB already allocated; 17.19 MiB free; 6.00 MiB reserved in total ...
FAILED tests/test_modeling_mobilebert.py::MobileBertModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA error: out of memory
================================================================================================= 7 failed, 1066 passed, 562 skipped, 1396 warnings in 333.27s (0:05:33) =================================================================================================
```
Some failures are about CUDA memory, which seems strange, especially the following one:
```
FAILED tests/test_modeling_openai.py::OpenAIGPTModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.78 GiB total capacity; 5.33 MiB already allocated; 17.19 MiB free; 6.00 MiB reserved in total ...
```
So I checked the output of `nvidia-smi` **before and after** the test to make sure that **no other** processes held GPU memory.
```
Sat Jul 4 08:49:02 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-PCIE... Off | 00000217:00:00.0 Off | 0 |
| N/A 30C P0 37W / 250W | 0MiB / 16160MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-PCIE... Off | 00001C95:00:00.0 Off | 0 |
| N/A 30C P0 36W / 250W | 0MiB / 16160MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla V100-PCIE... Off | 0000735E:00:00.0 Off | 0 |
| N/A 31C P0 39W / 250W | 0MiB / 16160MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 Tesla V100-PCIE... Off | 0000AC50:00:00.0 Off | 0 |
| N/A 31C P0 36W / 250W | 0MiB / 16160MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
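To double-check the same thing from Python (illustrative; run in the same venv):
```python
import torch

for i in range(torch.cuda.device_count()):
    # should print 0 allocated bytes for every GPU before the test run
    print(i, torch.cuda.memory_allocated(i))
```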
The **details of the failures** are too long to post here.
Please refer to [this page](https://github.com/xx-zhou16/tmp_issue/blob/master/README.md) for details.
## Related
I found a similar issue: #5070.
I think the only difference between that environment and mine is the version of `transformers`;
both sets of failures involve `test_multigpu_data_parallel_forward`.
However, #5070 has not been resolved.
----
If you need any other details, please reply to this issue and I will respond ASAP.
I am actively focused on this problem at the moment,
and I would be very glad if anyone could help solve it.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5513/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5512 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5512/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5512/comments | https://api.github.com/repos/huggingface/transformers/issues/5512/events | https://github.com/huggingface/transformers/pull/5512 | 650,835,034 | MDExOlB1bGxSZXF1ZXN0NDQ0MjQ0OTAz | 5,512 | Allow tests in examples to use cuda or fp16,if they are available | {
"login": "Joel-hanson",
"id": 17215044,
"node_id": "MDQ6VXNlcjE3MjE1MDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/17215044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Joel-hanson",
"html_url": "https://github.com/Joel-hanson",
"followers_url": "https://api.github.com/users/Joel-hanson/followers",
"following_url": "https://api.github.com/users/Joel-hanson/following{/other_user}",
"gists_url": "https://api.github.com/users/Joel-hanson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Joel-hanson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Joel-hanson/subscriptions",
"organizations_url": "https://api.github.com/users/Joel-hanson/orgs",
"repos_url": "https://api.github.com/users/Joel-hanson/repos",
"events_url": "https://api.github.com/users/Joel-hanson/events{/privacy}",
"received_events_url": "https://api.github.com/users/Joel-hanson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sshleifer Hope you are doing well. sorry for the delay, I have created the PR for some of the related issues which were mentioned in #5057.\r\n> 1. They never use Cuda or fp16, even if they are available.\r\n\r\nI have some doubts which had encountered when making this PR\r\n\r\n---\r\n**1.** As you had said there where some test which failed when enabling the Cuda or fp16\r\n- The Cuda and fp16 are not enabled for question-answering example (`run_squad.py`) as it is having a difference in the f1 score.\r\n- The language-modeling example (`run_language_modeling.py`) is having an issue when running with fp16.\r\n---\r\n**2.** I was not able to find the `test_hans.py` but was able to find a readme to run it. Is this intentional if not shall I have a `test_hans.py` file to run the same.\r\n\r\n\r\n---\r\n**3.** This is the list of tests which I got\r\n```bash\r\n$ ls -l examples/**/test*.py\r\n\r\nexamples/bert-loses-patience/test_run_glue_with_pabee.py\r\nexamples/seq2seq/bertabs/test_utils_summarization.py\r\nexamples/seq2seq/test_seq2seq_examples.py\r\nexamples/test_examples.py\r\nexamples/token-classification/test_ner_examples.py\r\n```\r\nI was not able to find the `test_summarization_examples.py` and `test_t5_examples.py`. I think I am doing something wrong.\r\n\r\n---\r\n**4.** I can have made the PR for some tests only, can do the same for others if the current PR satisfies your requirement. ",
"1) noted\r\n2)test_hans.py would be nice, but can be added in a separate PR.\r\n3) You are all set, nothing is wrong. \r\n4) Would love to see the current PR!\r\n\r\nThanks!",
"> 2)test_hans.py would be nice, but can be added in a separate PR.\r\n\r\nThe test_hans.py was created once via the PR https://github.com/huggingface/transformers/pull/2239 but I think somehow or for some reason, it was removed\r\n\r\nlast interaction with the file was in the PR https://github.com/huggingface/transformers/pull/4213",
"\r\nalso I missed this earlier, but what was the issue with run_language_modeling.py? Can you get a traceback?",
"> also I missed this earlier, but what was the issue with run_language_modeling.py? Can you get a traceback?\r\n\r\nThis is the short long and I have attached the pull log with this comment.\r\n```log\r\n with patch.object(sys, \"argv\", testargs):\r\n result = run_language_modeling.main()\r\n> self.assertLess(result[\"perplexity\"], 35)\r\nE AssertionError: 36.684365356893885 not less than 35\r\n```\r\n[test.log](https://github.com/huggingface/transformers/files/4955269/test.log)\r\n",
"@sshleifer Hope you are doing well.\r\n\r\nIs this PR ok or Should I split this into multiple smaller PRs to make things easier?",
"Thanks for the review @sshleifer, I have updated the code accordingly and regarding the new command line args, I have removed the args which were related to the multi-GPU support as we can make that a separate PR. \r\n\r\n**Any Idea why the test `run_tests_tf` and `run_tests_torch_and_tf` failing? Is it related to my change?**",
"Thanks for the review @LysandreJik, I have updated the PR accordingly.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=h1) Report\n> Merging [#5512](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f?el=desc) will **decrease** coverage by `0.72%`.\n> The diff coverage is `78.87%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5512 +/- ##\n==========================================\n- Coverage 80.37% 79.64% -0.73% \n==========================================\n Files 156 156 \n Lines 28058 28261 +203 \n==========================================\n- Hits 22552 22509 -43 \n- Misses 5506 5752 +246 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0.00% <0.00%> (รธ)` | |\n| [src/transformers/commands/user.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy91c2VyLnB5) | `0.00% <รธ> (รธ)` | |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <รธ> (รธ)` | |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `48.91% <0.00%> (-0.18%)` | :arrow_down: |\n| [src/transformers/data/test\\_generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Rlc3RfZ2VuZXJhdGlvbl91dGlscy5weQ==) | `0.00% <0.00%> (รธ)` | |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <รธ> (รธ)` | |\n| [src/transformers/hf\\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `67.74% <0.00%> (-1.49%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.73% <รธ> (รธ)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <รธ> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `82.13% <รธ> (รธ)` | |\n| ... and [58 more](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=footer). 
Last update [a573777...51be82f](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,598 | 1,598 | CONTRIBUTOR | null | The tests in examples did not use CUDA or fp16 even when they were available.
- The text-classification example (`run_glue.py`) did not use fp16 even when it was available, although the device was chosen based on availability (CUDA/CPU).
- The language-modeling example (`run_language_modeling.py`) had a `--no_cuda` argument, which made the test run without CUDA. This example has an issue when running with fp16, so fp16 is not enabled for it (the perplexity assertion fails because the value comes out higher).
- CUDA and fp16 are not enabled for the question-answering example (`run_squad.py`) because they cause a difference in the F1 score.
- The text-generation example (`run_generation.py`) uses CUDA or fp16 whenever available. (A sketch of the gating pattern follows this list.)
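As context, the device/precision gating boils down to roughly the following (a sketch of the idea, not the exact diff in this PR; the helper names are illustrative):
```python
import torch

def is_apex_available():
    # fp16 in these examples relies on NVIDIA apex being importable
    try:
        import apex  # noqa: F401
        return True
    except ImportError:
        return False

def extra_test_args():
    # Prefer GPU (plus fp16 when usable); otherwise fall back to plain CPU.
    args = []
    if torch.cuda.is_available():
        if is_apex_available():
            args.append("--fp16")
    else:
        args.append("--no_cuda")
    return args
```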
Resolves some of #5057 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5512/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5512",
"html_url": "https://github.com/huggingface/transformers/pull/5512",
"diff_url": "https://github.com/huggingface/transformers/pull/5512.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5512.patch",
"merged_at": 1598349728000
} |
https://api.github.com/repos/huggingface/transformers/issues/5511 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5511/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5511/comments | https://api.github.com/repos/huggingface/transformers/issues/5511/events | https://github.com/huggingface/transformers/pull/5511 | 650,827,632 | MDExOlB1bGxSZXF1ZXN0NDQ0MjM5NTMx | 5,511 | example code missing `encode` | {
"login": "mrleu",
"id": 40532483,
"node_id": "MDQ6VXNlcjQwNTMyNDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/40532483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrleu",
"html_url": "https://github.com/mrleu",
"followers_url": "https://api.github.com/users/mrleu/followers",
"following_url": "https://api.github.com/users/mrleu/following{/other_user}",
"gists_url": "https://api.github.com/users/mrleu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrleu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrleu/subscriptions",
"organizations_url": "https://api.github.com/users/mrleu/orgs",
"repos_url": "https://api.github.com/users/mrleu/repos",
"events_url": "https://api.github.com/users/mrleu/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrleu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=h1) Report\n> Merging [#5511](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `0.41%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5511 +/- ##\n==========================================\n- Coverage 77.83% 77.42% -0.42% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n- Hits 19175 19072 -103 \n- Misses 5459 5562 +103 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.99% <0.00%> (-6.13%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.94% <0.00%> (-5.27%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.44% <0.00%> (-1.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=footer). Last update [58cca47...68a7acd](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"found out the doc, calling it should work."
] | 1,593 | 1,593 | 1,593 | NONE | null | I was following the example and ran into the following when calling the tokenizer directly:
```
In [27]: encoding = tokenizer(text_batch, return_tensors='pt', padding=True, truncation=True)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-27-c8731cd264f2> in <module>
----> 1 encoding = tokenizer(text_batch, return_tensors='pt', padding=True, truncation=True)
TypeError: 'BertTokenizer' object is not callable
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5511/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5511",
"html_url": "https://github.com/huggingface/transformers/pull/5511",
"diff_url": "https://github.com/huggingface/transformers/pull/5511.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5511.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5510 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5510/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5510/comments | https://api.github.com/repos/huggingface/transformers/issues/5510/events | https://github.com/huggingface/transformers/pull/5510 | 650,817,931 | MDExOlB1bGxSZXF1ZXN0NDQ0MjMyNDg5 | 5,510 | Fix typo in training | {
"login": "ELanning",
"id": 38930062,
"node_id": "MDQ6VXNlcjM4OTMwMDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/38930062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ELanning",
"html_url": "https://github.com/ELanning",
"followers_url": "https://api.github.com/users/ELanning/followers",
"following_url": "https://api.github.com/users/ELanning/following{/other_user}",
"gists_url": "https://api.github.com/users/ELanning/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ELanning/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ELanning/subscriptions",
"organizations_url": "https://api.github.com/users/ELanning/orgs",
"repos_url": "https://api.github.com/users/ELanning/repos",
"events_url": "https://api.github.com/users/ELanning/events{/privacy}",
"received_events_url": "https://api.github.com/users/ELanning/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=h1) Report\n> Merging [#5510](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `0.41%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5510 +/- ##\n==========================================\n- Coverage 77.83% 77.42% -0.41% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n- Hits 19175 19074 -101 \n- Misses 5459 5560 +101 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=footer). Last update [58cca47...121648f](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks a lot!"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | Fixes a small typo in the code example.
Ref code: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1211 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5510/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5510",
"html_url": "https://github.com/huggingface/transformers/pull/5510",
"diff_url": "https://github.com/huggingface/transformers/pull/5510.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5510.patch",
"merged_at": 1594041298000
} |
https://api.github.com/repos/huggingface/transformers/issues/5509 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5509/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5509/comments | https://api.github.com/repos/huggingface/transformers/issues/5509/events | https://github.com/huggingface/transformers/issues/5509 | 650,792,768 | MDU6SXNzdWU2NTA3OTI3Njg= | 5,509 | TPU Trainer memory leak and memory requirements | {
"login": "misrasaurabh1",
"id": 1271289,
"node_id": "MDQ6VXNlcjEyNzEyODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1271289?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/misrasaurabh1",
"html_url": "https://github.com/misrasaurabh1",
"followers_url": "https://api.github.com/users/misrasaurabh1/followers",
"following_url": "https://api.github.com/users/misrasaurabh1/following{/other_user}",
"gists_url": "https://api.github.com/users/misrasaurabh1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/misrasaurabh1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/misrasaurabh1/subscriptions",
"organizations_url": "https://api.github.com/users/misrasaurabh1/orgs",
"repos_url": "https://api.github.com/users/misrasaurabh1/repos",
"events_url": "https://api.github.com/users/misrasaurabh1/events{/privacy}",
"received_events_url": "https://api.github.com/users/misrasaurabh1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @misrasaurabh1 I have also faced this issue. One immediate solution is to use lazy loading dataset, as xla by default loads the dataset in all processes. Something like `nlp` can really help here. as posted by @thomwolf here https://gist.github.com/thomwolf/13ca2b2b172b2d17ac66685aa2eeba62\r\n\r\n`nlp` can load full english wikipedia dataset (17 GB+) in just 9 MB of RAM. However I've not been able to use it with TPU's yet.",
"Very interesting, if one could load data in a lazy-loading manner using `nlp` it would help a lot! Although I am not sure if we can use our custom datasets with `nlp`.\r\n\r\nIf there was a way to use a shared memory between different processes or each process reading independently from memory-mapped on drive, that could solve this problem.",
"It's possible to use custom datasets with `nlp`. This section can help you https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset\r\n\r\nWe can create dataset as described in above steps, and then just pass the path of created dataset to `nlp.load_dataset` ",
"Although I still want to highlight that this bug is still unfixed and a big problem. Drawing attention from the HuggingFace team!",
"Update on how I solved the large memory usage. Because I just wanted one copy of the features in memory, so I used a redis-server locally to cache the features in memory in a separate process. I am also using unix sockets to make connections faster. Items stored in Redis have keys equal to the array index and value equal to the pickled feature dictionary. In the __getitem__ I am unpickling the data on the fly.\r\nThis reduced memory consumption from 100s of GBs to 30GB. Surprisingly, the training also became 20% faster after using redis-server.",
"Thanks for sharing this @misrasaurabh1 ",
"Thanks for letting us know @misrasaurabh1! We're very close to having TPU CI, only a few days away to a week, so we'll be able to monitor such metrics more closely.",
"Although the memory still keeps increasing as the training progresses. If it becomes too much of an issue for a reliable training pipeline I can dig deeper.",
"The memory spikes up everytime evaluation loop happens.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,602 | 1,602 | CONTRIBUTOR | null | # ๐ Bug
The TPU Trainer for PyTorch is extremely memory inefficient, as it eats up a lot of CPU memory while training. Moreover, the training pipeline has a memory leak somewhere that increases memory utilization as training proceeds.
## Information
I am using a huge machine (n1-highmem-32) with 208 GB of RAM, and I am trying to train seq2seq models like T5-base and BART-base on 880,000 examples. The problem is that with the xla_spawn.py method of training, multiprocessing starts 8 processes that each load the features separately, resulting in 8x the memory usage compared to sharing them. This memory scaling also restricts which TPU one can use; I cannot imagine needing a 1 TB RAM machine just to use a v2-32 instance with 32 processes! I am trying to productionize a training pipeline with TPUs, and the memory scaling problem makes it really hard and costly. TPUs are designed for training very large models on large datasets, but this memory issue is a major bottleneck. I could train with larger datasets, but the memory usage prevents me.
In the images below, one can see that for BART training the memory utilization starts at 80% but rises to 100% within 14 hours, killing the training. I have to restart training 3 times to make progress.


This is the memory usage with T5. It starts at 45% and ends at 75%.

Could you consider rearchitecting the TPU Trainer so that it does not take humongous amounts of RAM? Perhaps a streaming or lazy-loading architecture that only loads the features as and when they are required? Also, multiprocessing is inherently not scalable: does one launch 1024 processes to train on a 1024-core TPU Pod? I previously built a TPU training pipeline with TensorFlow 1 and it worked spectacularly well, with TFRecords and TensorFlow handling all data streaming through Google Storage. Pipeline performance scaled linearly with the number of TPU cores without consuming much memory. (Image attached below.)

A scalable pipeline would make Huggingface a real contender for production-level TPU PyTorch training pipelines.
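For illustration, here is a minimal sketch of the lazy-loading direction suggested above (hypothetical class; it assumes features are pre-serialized into a fixed-shape on-disk array, and it is not what the current Trainer does):
```python
import numpy as np
import torch
from torch.utils.data import Dataset

class MmapFeaturesDataset(Dataset):
    """Reads one example at a time from a memory-mapped array, so the
    processes spawned by xla_spawn.py share OS page cache instead of
    each holding a full in-RAM copy of the features."""

    def __init__(self, path, num_examples, seq_len):
        # np.memmap keeps the data on disk; pages are faulted in on access
        self.input_ids = np.memmap(path, dtype=np.int64, mode='r',
                                   shape=(num_examples, seq_len))

    def __len__(self):
        return self.input_ids.shape[0]

    def __getitem__(self, idx):
        return {'input_ids': torch.tensor(np.asarray(self.input_ids[idx]))}
```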
Model I am using (Bert, XLNet ...): T5, BART
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Use the official example at https://github.com/huggingface/transformers/tree/master/examples#running-on-tpus
## Expected behavior
Memory usage should stay contained and not increase over time.
## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-5.3.0-1026-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0a0+4121d34 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: Yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5509/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5509/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5508 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5508/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5508/comments | https://api.github.com/repos/huggingface/transformers/issues/5508/events | https://github.com/huggingface/transformers/issues/5508 | 650,781,481 | MDU6SXNzdWU2NTA3ODE0ODE= | 5,508 | T5 Masking: | {
"login": "zbush548",
"id": 61605741,
"node_id": "MDQ6VXNlcjYxNjA1NzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/61605741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zbush548",
"html_url": "https://github.com/zbush548",
"followers_url": "https://api.github.com/users/zbush548/followers",
"following_url": "https://api.github.com/users/zbush548/following{/other_user}",
"gists_url": "https://api.github.com/users/zbush548/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zbush548/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zbush548/subscriptions",
"organizations_url": "https://api.github.com/users/zbush548/orgs",
"repos_url": "https://api.github.com/users/zbush548/repos",
"events_url": "https://api.github.com/users/zbush548/events{/privacy}",
"received_events_url": "https://api.github.com/users/zbush548/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`min_length` corresponds to the minimum number of tokens, not words.\r\n\r\nFeel free to reopen if this does not answer your question."
] | 1,593 | 1,594 | 1,594 | NONE | null | While this runs, the outputs do not match what is specified in `model.generate`.
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 't5-base'  # assumption: the issue does not say which checkpoint was used
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Input text with a single sentinel mask
text = 'This <extra_id_0> sentence. </s>'
encoded = tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')
input_ids = encoded['input_ids']

# Generating 20 sequences with minimum length 5 and maximum length 10
outputs = model.generate(input_ids=input_ids,
                         num_beams=200, num_return_sequences=20,
                         min_length=5, max_length=10)

_0_index = text.index('<extra_id_0>')
_result_prefix = text[:_0_index]
_result_suffix = text[_0_index + 12:]  # 12 is the length of '<extra_id_0>'

def _filter(output, end_token='<extra_id_1>'):
    # The first generated token is <pad> (id 0) and the second is <extra_id_0> (id 32099),
    # so the decoded fill-in starts at position 2
    _txt = tokenizer.decode(output[2:], skip_special_tokens=False, clean_up_tokenization_spaces=False)
    if end_token in _txt:
        _end_token_index = _txt.index(end_token)
        return _result_prefix + _txt[:_end_token_index] + _result_suffix
    return _result_prefix + _txt + _result_suffix

results = list(map(_filter, outputs))
```
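(A note on the numbers before the output: `min_length` and `max_length` are counted in generated sentencepiece tokens, not in words. A quick illustrative check, reusing the tokenizer above:)
```python
print(tokenizer.tokenize('holds no'))  # two words, but generate() counts the resulting tokens
```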
Example Output:
`This perception holds no validity.`
As can be seen, "holds no" is only two words even though I set "min_length" to five. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5508/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5507 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5507/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5507/comments | https://api.github.com/repos/huggingface/transformers/issues/5507/events | https://github.com/huggingface/transformers/issues/5507 | 650,776,509 | MDU6SXNzdWU2NTA3NzY1MDk= | 5,507 | What's the correct way to use add_prefix_space for the fast RoBERTa tokenizer in 3.0.0/3.0.1? | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Thanks for flagging, the docstrings were indeed very confusing!\r\nI hope the new doc introduced in #5559 is clearer and solves your problem. Please reopen if needed.",
"Thanks for the clarification in the new doc! Does it mean RoBERTa was pre-trained **without** this add_prefix_space behavior? In other words, in the original model, does it encode \"Hello world\" with 2 or 3 tokens, excluding special tokens? If it was pre-trained without adding the prefix space, does it mean the **fast tokenizers** in the pre-3.0.0 transformers versions were adding this prefix space incorrectly?",
"E.g. in 2.11.0\r\n```\r\n>>> from transformers import RobertaTokenizerFast\r\n>>> tokenizer = RobertaTokenizerFast.from_pretrained(\"roberta-base\")\r\n>>> tokenizer.tokenize(\"Hello world\")\r\n['ฤ Hello', 'ฤ world']\r\n```",
"I think it was incorrect, yes, since it currently returns `['Hello', 'ฤ world']` like the slow tokenizer. Multiple bugs in fast tokenizer were fixed in 3.0.0."
] | 1,593 | 1,604 | 1,594 | CONTRIBUTOR | null | The documentation says we should pass this flag to **the encoding methods**.
https://github.com/huggingface/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/tokenization_roberta.py#L259-L260
However, the encoding methods don't take such an argument, and passing it in actually causes an error:
```
File "test.py", line 127, in func
self.tokenizer.encode(token, add_special_tokens=False, add_prefix_space=True)
File ".../transformers/tokenization_utils_base.py", line 1425, in encode
**kwargs,
File ".../transformers/tokenization_utils_base.py", line 1737, in encode_plus
**kwargs,
File ".../transformers/tokenization_gpt2.py", line 377, in _encode_plus
return super()._encode_plus(*args, **kwargs)
File ".../transformers/tokenization_utils_fast.py", line 420, in _encode_plus
**kwargs,
File ".../transformers/tokenization_gpt2.py", line 367, in _batch_encode_plus
return super()._batch_encode_plus(*args, **kwargs)
File ".../transformers/tokenization_utils_fast.py", line 313, in _batch_encode_plus
raise ValueError(f"Keyword arguments {kwargs} not recognized.")
ValueError: Keyword arguments {'add_prefix_space': True} not recognized.
```
Then, I assumed that this documentation was a typo and that we should pass this argument to `__init__`. But why does it default to `False` in `__init__`? Don't we pretty much always need to add a prefix space? Does `from_pretrained` set it to `True` automatically? If not, should we always do `AutoTokenizer.from_pretrained(..., add_prefix_space=True)`?
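For reference, here is the init-time usage I am asking about (a minimal sketch; the expected outputs in the comments are my assumption, extrapolating from the behavior described above):
```python
from transformers import RobertaTokenizerFast

tok_default = RobertaTokenizerFast.from_pretrained("roberta-base")
print(tok_default.tokenize("Hello world"))  # ['Hello', 'Ġworld'] -> no prefix space on the first word

tok_prefix = RobertaTokenizerFast.from_pretrained("roberta-base", add_prefix_space=True)
print(tok_prefix.tokenize("Hello world"))   # ['ĠHello', 'Ġworld'] -> prefix space added
```
The `__init__` signature in question: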
https://github.com/huggingface/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/tokenization_roberta.py#L299-L314 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5507/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5506 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5506/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5506/comments | https://api.github.com/repos/huggingface/transformers/issues/5506/events | https://github.com/huggingface/transformers/issues/5506 | 650,756,275 | MDU6SXNzdWU2NTA3NTYyNzU= | 5,506 | Why is `encoder_extended_attention_mask = None` when `config.is_decoder == False` | {
"login": "UsmannK",
"id": 8558782,
"node_id": "MDQ6VXNlcjg1NTg3ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8558782?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/UsmannK",
"html_url": "https://github.com/UsmannK",
"followers_url": "https://api.github.com/users/UsmannK/followers",
"following_url": "https://api.github.com/users/UsmannK/following{/other_user}",
"gists_url": "https://api.github.com/users/UsmannK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/UsmannK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/UsmannK/subscriptions",
"organizations_url": "https://api.github.com/users/UsmannK/orgs",
"repos_url": "https://api.github.com/users/UsmannK/repos",
"events_url": "https://api.github.com/users/UsmannK/events{/privacy}",
"received_events_url": "https://api.github.com/users/UsmannK/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `encoder_attention_mask` is only relevant if BERT is uses as a Encoder-Decoder model using the `EncoderDecoderModel` wrapper class. In this case the decoder should be able to accept an `encoder_attention_mask` for its cross-attention layers. \r\n\r\nIn all other cases this mask is not relevant and should be set to None. \r\n\r\nI agree that the check `if self.is_decoder` is probably not the best one here it should rather be `if self.is_encoder_decoder and self.is_decoder`. will update this soon.\r\n\r\nFeel free to reopen if this does not answer your question",
"Hi Patrick, thanks for the swift response. Iโm not sure if I understand: shouldnโt we always want to mask the padded tokens, even in the encoder?\r\n\r\nIn fact the canonical BERT model suggests this, where they have no such check: https://github.com/google-research/bert/blob/master/modeling.py#L200",
"@patrickvonplaten Sorry for the noise. Noticed you said to reopen the issue but I think only maintainers have this permission :)",
"This `encoder_attention_mask` is only relevent for a Bert EncoderDecoder model. It is not the same as the usual `attention_mask`",
"Ah, I see. Looking again at the code I definitely misunderstood that. Thanks a ton."
] | 1,593 | 1,594 | 1,594 | NONE | null | Potential Bug(?)
Reading the codebase, I see that attention masks appear to be ignored for many of the pretrained model configs, such as `'bert-base-uncased'`. We can see [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L743) that the attention mask is simply cleared out. Is this intentional?
```
from transformers import BertModel
config_path = 'bert-base-uncased'
config = BertModel.config_class.from_pretrained(config_path)
print(f'is_decoder: {config.is_decoder}')
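# Note (added for clarity): `is_decoder` only gates the cross-attention
# `encoder_extended_attention_mask`; the regular padding `attention_mask`
# argument of forward() is still converted to an extended mask and applied.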
```
which outputs `is_decoder: False` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5506/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5506/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5505 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5505/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5505/comments | https://api.github.com/repos/huggingface/transformers/issues/5505/events | https://github.com/huggingface/transformers/issues/5505 | 650,747,379 | MDU6SXNzdWU2NTA3NDczNzk= | 5,505 | 3.0.1 BertTokenizer batch_encode_plus() shows warnings "Truncation was not explicitely activated but `max_length` is provided a specific value" | {
"login": "githubrandomuser2017",
"id": 25097908,
"node_id": "MDQ6VXNlcjI1MDk3OTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/25097908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/githubrandomuser2017",
"html_url": "https://github.com/githubrandomuser2017",
"followers_url": "https://api.github.com/users/githubrandomuser2017/followers",
"following_url": "https://api.github.com/users/githubrandomuser2017/following{/other_user}",
"gists_url": "https://api.github.com/users/githubrandomuser2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/githubrandomuser2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/githubrandomuser2017/subscriptions",
"organizations_url": "https://api.github.com/users/githubrandomuser2017/orgs",
"repos_url": "https://api.github.com/users/githubrandomuser2017/repos",
"events_url": "https://api.github.com/users/githubrandomuser2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/githubrandomuser2017/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"same problem here",
"same problem here",
"I bumped the same problem. After a plenty of warning messages my colab stuck and web-page didn't answer. As a temporary solution I installed previous version of transformers library ```!pip install transformers==3.0.0``` and evething with ```BertTokenizer.batch_encode_plus()``` is okay.",
"The massive amount of generated warnings is crashing the Google Colab session in my Chrome browser. The only temporary fix I can find is to turn off all warning messages.\r\n\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.ERROR)\r\n```",
"same even when the `batch_encode_plus` method is not called (directly calling `tokenizer()` with a list of string)",
"Ok indeed, we are on it and will release a fix soon. Sorry for that.",
"Tested in 3.0.2 - warning persists.",
"Can you post the command line (tokenizer call) you are running?",
"Same here, have tried myriad combinations to try to eliminate the warning.\r\n\r\n```\r\nmodel = Seq2SeqModel(\r\n encoder_type,\r\n \"roberta-base\",\r\n \"bert-base-cased\",\r\n args=model_args,\r\n truncation=True,\r\n use_cuda=cuda_available\r\n)\r\n```\r\n\r\n> WARNING:transformers.tokenization_utils_base:Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.\r\n> WARNING:transformers.tokenization_utils_base:Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.\r\n> WARNING:transformers.tokenization_utils_base:Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.\r\n> WARNING:transformers.tokenization_utils_base:Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.\r\n",
"I can't find the class `Seq2SeqModel` on our repo so I can't reproduce your code.\r\n\r\nYou need to open a new issue with all the details so I can try to reproduce and debug this.",
"Should only print it once"
] | 1,593 | 1,597 | 1,594 | NONE | null | # ๐ Bug
## Information
Model I am using (Bert, XLNet ...): I'm using Transformers 3.0.1 and the BERT model to do two-sentence NLI-style classification.
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I have a question and two issues:
1. What happened to the `BertTokenizer.encode_plus()` and `BertTokenizer.batch_encode_plus()` methods? I see there must have been a change somewhere to remove them in Transformers 3.0.0, but I cannot find any online change log or other description of what the replacement methods are. (A sketch of what I believe the replacement call-style API looks like follows this list.)
2. This issue I am getting is that there are a lot of repeated messages stating
> Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
This issue is apparently the same as the closed issue #5377. However, that one was with respect to `encode_plus ()`, **but my issue is with `batch_encode_plus()`**.
3. Please fix the typo: `Truncation was not explicitely ...` to be `Truncation was not explicitly ...`.
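For context, my understanding is that in 3.0 the tokenizer instance itself is callable and is the intended replacement for the `*encode_plus()` methods (this is my reading of the 3.0 docs rather than an official changelog; the example values are illustrative):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# single pair of sentences
enc = tokenizer('Transformers are', 'more than meets the eye',
                truncation=True, max_length=50)

# batch of sentence pairs
batch = tokenizer([('Transformers are', 'more than meets the eye')],
                  truncation=True, max_length=50)
```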
## To reproduce
Steps to reproduce the behavior for `batch_encode_plus()`:
I adapted the code from #5377:
```python
import transformers
print(transformers.__version__)
# 3.0.1
from transformers import BertTokenizer
max_len = 50
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# This example works fine with encode_plus().
text = 'Transformers are more than meets the eye'
encoded_dict = tokenizer.encode_plus(text,
text_pair=None,
add_special_tokens=True,
max_length=max_len,
pad_to_max_length=False,
truncation='longest_first')
# This call to batch_encode_plus() shows warnings.
list_of_pair_of_string = []
list_of_pair_of_string.append( ('Transformers are', 'more than meets the eye') )
encoded_dict = tokenizer.batch_encode_plus(batch_text_or_text_pairs=list_of_pair_of_string,
add_special_tokens=True,
max_length=max_len,
pad_to_max_length=False,
truncation='longest_first'
#truncation=True
)
```
Note that for my call to `batch_encode_plus()`, I tried both `truncation='longest_first'` and also `truncation=True`.
However, the call always shows:
> Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
## Expected behavior
The call to `batch_encode_plus()` should not show any warning because I specifically provided the `truncation=` parameter.
## Environment info
- `transformers` version: 3.0.1
- Platform: Google Colab
- Python version: 3.6.9 (default, Apr 18 2020, 01:56:04)
- PyTorch version (GPU?): 1.5.1+cu101
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5505/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5505/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5504 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5504/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5504/comments | https://api.github.com/repos/huggingface/transformers/issues/5504/events | https://github.com/huggingface/transformers/issues/5504 | 650,744,942 | MDU6SXNzdWU2NTA3NDQ5NDI= | 5,504 | Write With Transformers | {
"login": "zbush548",
"id": 61605741,
"node_id": "MDQ6VXNlcjYxNjA1NzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/61605741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zbush548",
"html_url": "https://github.com/zbush548",
"followers_url": "https://api.github.com/users/zbush548/followers",
"following_url": "https://api.github.com/users/zbush548/following{/other_user}",
"gists_url": "https://api.github.com/users/zbush548/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zbush548/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zbush548/subscriptions",
"organizations_url": "https://api.github.com/users/zbush548/orgs",
"repos_url": "https://api.github.com/users/zbush548/repos",
"events_url": "https://api.github.com/users/zbush548/events{/privacy}",
"received_events_url": "https://api.github.com/users/zbush548/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | # ๐ Bug
## Information
Model: XLNet
Outputs are extremely unusual, and I suspect there is an issue.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5504/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5503 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5503/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5503/comments | https://api.github.com/repos/huggingface/transformers/issues/5503/events | https://github.com/huggingface/transformers/issues/5503 | 650,665,832 | MDU6SXNzdWU2NTA2NjU4MzI= | 5,503 | T5 Training on TPU doesnt use TPU | {
"login": "santhoshkolloju",
"id": 4193817,
"node_id": "MDQ6VXNlcjQxOTM4MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4193817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/santhoshkolloju",
"html_url": "https://github.com/santhoshkolloju",
"followers_url": "https://api.github.com/users/santhoshkolloju/followers",
"following_url": "https://api.github.com/users/santhoshkolloju/following{/other_user}",
"gists_url": "https://api.github.com/users/santhoshkolloju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/santhoshkolloju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santhoshkolloju/subscriptions",
"organizations_url": "https://api.github.com/users/santhoshkolloju/orgs",
"repos_url": "https://api.github.com/users/santhoshkolloju/repos",
"events_url": "https://api.github.com/users/santhoshkolloju/events{/privacy}",
"received_events_url": "https://api.github.com/users/santhoshkolloju/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @santhoshkolloju what is the xla and pytorch version ?",
"Pytorch - 1.7.0a0+542ac74\r\ntorch_xla - 1.6+30b65e9\r\nThe setup file is taken from \r\nVERSION = \"nightly\" #@param [\"1.5\" , \"20200325\", \"nightly\"]\r\n!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py\r\n!python pytorch-xla-env-setup.py --version $VERSION",
"@patil-suraj i just realised that i was using the notebook created by you. Thanks for the notebook.\r\nDid anything has changed in the current xla library because of which this is happening? ",
"Hi @santhoshkolloju , yes there's been some changes to xla, I'm working on the fix, will let you know when I find it.\r\n\r\nAlso looks like you are trying to do question generation, I am going to release my own experiments by the end of the week. It's pretty exciting. Here's a relevant thread for QG #4399",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | I am trying to train T5 on TPU. Training started without errors, but it is very slow. I think it is not using the TPU backend.
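A quick sanity check, as a minimal sketch: it assumes `torch_xla` is installed as in the notebook's setup cell, and `model` stands for the T5 model being trained (a hypothetical name here). It confirms whether the computation is actually placed on an XLA device:
```python
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # resolves the TPU-backed XLA device, e.g. xla:1
print(device)
model = model.to(device)  # without an explicit move, training silently stays on CPU
```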
https://colab.research.google.com/drive/10TN0zgPWCIAzbA0PKYIo3fF8wOvfa1P7?usp=sharing | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5503/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5502 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5502/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5502/comments | https://api.github.com/repos/huggingface/transformers/issues/5502/events | https://github.com/huggingface/transformers/issues/5502 | 650,626,185 | MDU6SXNzdWU2NTA2MjYxODU= | 5,502 | licens | {
"login": "TCUI22",
"id": 49901244,
"node_id": "MDQ6VXNlcjQ5OTAxMjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/49901244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TCUI22",
"html_url": "https://github.com/TCUI22",
"followers_url": "https://api.github.com/users/TCUI22/followers",
"following_url": "https://api.github.com/users/TCUI22/following{/other_user}",
"gists_url": "https://api.github.com/users/TCUI22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TCUI22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TCUI22/subscriptions",
"organizations_url": "https://api.github.com/users/TCUI22/orgs",
"repos_url": "https://api.github.com/users/TCUI22/repos",
"events_url": "https://api.github.com/users/TCUI22/events{/privacy}",
"received_events_url": "https://api.github.com/users/TCUI22/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5502/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5501 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5501/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5501/comments | https://api.github.com/repos/huggingface/transformers/issues/5501/events | https://github.com/huggingface/transformers/pull/5501 | 650,602,171 | MDExOlB1bGxSZXF1ZXN0NDQ0MDYyMjc0 | 5,501 | Merge pull request #1 from huggingface/master | {
"login": "parmarsuraj99",
"id": 9317265,
"node_id": "MDQ6VXNlcjkzMTcyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9317265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parmarsuraj99",
"html_url": "https://github.com/parmarsuraj99",
"followers_url": "https://api.github.com/users/parmarsuraj99/followers",
"following_url": "https://api.github.com/users/parmarsuraj99/following{/other_user}",
"gists_url": "https://api.github.com/users/parmarsuraj99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parmarsuraj99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parmarsuraj99/subscriptions",
"organizations_url": "https://api.github.com/users/parmarsuraj99/orgs",
"repos_url": "https://api.github.com/users/parmarsuraj99/repos",
"events_url": "https://api.github.com/users/parmarsuraj99/events{/privacy}",
"received_events_url": "https://api.github.com/users/parmarsuraj99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Updated | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5501/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5501",
"html_url": "https://github.com/huggingface/transformers/pull/5501",
"diff_url": "https://github.com/huggingface/transformers/pull/5501.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5501.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5500 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5500/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5500/comments | https://api.github.com/repos/huggingface/transformers/issues/5500/events | https://github.com/huggingface/transformers/issues/5500 | 650,584,256 | MDU6SXNzdWU2NTA1ODQyNTY= | 5,500 | batch_encode_plus model output is different from tokenizer.encode model's output | {
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Indeed, if you are using padding you need to provide the attention masks to your model otherwise it doesn't know which tokens it should not attend to.\r\n\r\nHere is the correct version of `batch_encoding` which will give the same output as the non batched version:\r\n```python\r\ndef batch_encoding(sentences):\r\n \r\n inputs = tokenizer(sentences, padding=True, return_tensors='pt')\r\n print(inputs)\r\n outputs = model(**inputs)\r\n features = outputs[0][:,0,:].detach().numpy()\r\n \r\n return features\r\n```\r\nI've also updated it to the new tokenizers API on which you can learn a lot more in the tutorial here: https://huggingface.co/transformers/preprocessing.html",
"@thomwolf I am using the pre-trained distilbert model.\r\nencoded_batch = tokenizer.batch_encode_plus([\"hello\", \"there you\"], add_special_tokens=True, return_tensors=\"tf\", padding=True, truncation=True)\r\n`{'input_ids': <tf.Tensor: shape=(2, 4), dtype=int32, numpy=\r\narray([[ 101, 7592, 102, 0],\r\n [ 101, 2045, 2017, 102]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 4), dtype=int32, numpy=\r\narray([[1, 1, 1, 0],\r\n [1, 1, 1, 1]], dtype=int32)>}`\r\n\r\nencoded_batch[0][0] -> pointing to hidden states of \"hello\"\r\n<tf.Tensor: shape=(4, 768), dtype=float32, numpy=\r\narray([[-0.20557329, -0.18245512, 0.0950693 , ..., -0.06398913,\r\n 0.16745588, 0.37530535],\r\n [-0.48316494, -0.13651992, 0.3210112 , ..., 0.0130196 ,\r\n 0.27123356, 0.15390822],\r\n [ 0.8976611 , 0.14261621, -0.40148023, ..., 0.31185225,\r\n -0.68173647, -0.2594604 ],\r\n [-0.0716978 , -0.18830499, 0.35636497, ..., 0.09993267,\r\n -0.05575091, 0.14572877]], dtype=float32)>\r\n\r\n\r\nand if i encode \"hello\" alone, \r\n<tf.Tensor: shape=(3, 768), dtype=float32, numpy=\r\narray([[-0.20557335, -0.18245521, 0.09506968, ..., -0.06398894,\r\n 0.16745585, 0.37530527],\r\n [-0.48316532, -0.13651986, 0.32101193, ..., 0.0130194 ,\r\n 0.27123365, 0.15390885],\r\n [ 0.89766103, 0.14261642, -0.40148014, ..., 0.31185254,\r\n -0.68173563, -0.25946063]], dtype=float32)>\r\n\r\nso here, if you see, \r\nthere is one row extra due to padding in the above instance,\r\nand there is no way to figure out if that is SEP embeds or padded embeds\r\nand that can lead to discrepancies.\r\n\r\nIs there some way to send inputs to the model in a batch, keeping the embeddings intact?\r\n\r\n\r\nThanks in advance"
] | 1,593 | 1,597 | 1,594 | NONE | null | I am trying to encode multiple sentences with BertTokenizer. I tried batch_encode_plus, but I get different output when feeding BertTokenizer's output vs. batch_encode_plus's output to the model.
```python
single_sentence = 'checking single sentences'
sentences = ['checking single sentences', 'many sentences encoding together', 'hello world how and', 'All model checkpoint weights were used when initializing BertModel.']

# loading models
from transformers import BertModel, BertConfig, BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
model = BertModel.from_pretrained('bert-large-uncased')

# single encoding and hidden layer weights
def single_query(sentence):
    single_input_id = torch.tensor(tokenizer.encode(sentence)).unsqueeze(0)  # Batch size 1
    outputs = model(single_input_id)
    features = outputs[0][:, 0, :].detach().numpy()
    return features

def batch_encoding(sentences):
    input_ids = torch.tensor(tokenizer.batch_encode_plus(sentences, pad_to_max_length=True)['input_ids'])
    outputs = model(input_ids)  # was `model(all_f)`, an undefined name; note no attention_mask is passed
    features = outputs[0][:, 0, :].detach().numpy()
    return features
```
single_query(single_sentence) returns:
```
array([[-0.39814326, -0.4902882 , 0.02681825, ..., -0.28256905,
-1.0546892 , 0.1055279 ]], dtype=float32)
```
while `batch_enc = batch_encoding(sentences)[0]` returns:
```
array([ 0.1909762 , 0.05580305, 0.221862 , ..., 0.16220105,
-0.88524836, 0.12994497], dtype=float32)
```
Why is there a difference in the model's output? Is it because of the padding?
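For completeness, a minimal sketch (reusing the `bert-large-uncased` tokenizer and model defined above; the function name is illustrative) of a mask-aware batched version:
```python
def batch_encoding_with_mask(sentences):
    # tokenizer/model are the bert-large-uncased objects created above
    enc = tokenizer(sentences, padding=True, return_tensors='pt')  # pads to the longest sentence
    outputs = model(input_ids=enc['input_ids'], attention_mask=enc['attention_mask'])
    return outputs[0][:, 0, :].detach().numpy()  # [CLS] features, one row per sentence
```
With the mask supplied, padded positions no longer contribute to attention, so the first row should match `single_query(single_sentence)`.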
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5500/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5500/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5499 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5499/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5499/comments | https://api.github.com/repos/huggingface/transformers/issues/5499/events | https://github.com/huggingface/transformers/issues/5499 | 650,583,810 | MDU6SXNzdWU2NTA1ODM4MTA= | 5,499 | [ERROR] add_special_tokens = True not working in version 3.0.0 | {
"login": "1512262",
"id": 28997653,
"node_id": "MDQ6VXNlcjI4OTk3NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/28997653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1512262",
"html_url": "https://github.com/1512262",
"followers_url": "https://api.github.com/users/1512262/followers",
"following_url": "https://api.github.com/users/1512262/following{/other_user}",
"gists_url": "https://api.github.com/users/1512262/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1512262/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1512262/subscriptions",
"organizations_url": "https://api.github.com/users/1512262/orgs",
"repos_url": "https://api.github.com/users/1512262/repos",
"events_url": "https://api.github.com/users/1512262/events{/privacy}",
"received_events_url": "https://api.github.com/users/1512262/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, can you provide more information (basically fill all the filed in the issue templates) in particular provide a clear example so we can try to reproduce the behavior?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): Multi-Lingual
The problem arises when using:
* [x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
In version 2.11 everything was OK, but after updating to 3.0.0 I cannot create fixed-length, padded encodings of the text.
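For reference, a minimal sketch of how fixed-length padded encodings are expressed with the 3.0.0 API (the model name and `max_length` below are illustrative):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = tokenizer.encode_plus(
    "example sentence",
    add_special_tokens=True,
    padding="max_length",  # replaces the deprecated pad_to_max_length=True
    truncation=True,
    max_length=32,
)
print(len(enc["input_ids"]))  # 32
```
`padding="max_length"` together with `truncation=True` reproduces the fixed-length behavior of the old `pad_to_max_length=True` flag.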
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5499/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5498 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5498/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5498/comments | https://api.github.com/repos/huggingface/transformers/issues/5498/events | https://github.com/huggingface/transformers/issues/5498 | 650,573,062 | MDU6SXNzdWU2NTA1NzMwNjI= | 5,498 | What happened to https://huggingface.co/zero-shot/ ? | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
}
] | [
"We're back up, and should now be setup to auto-restart when it fails. Thanks for the heads up!",
"Awsome! Thanks.",
"Hi! It seems the page is down again.",
"Man, streamlit's killing me. Thanks, rebooting now.",
"Ouch, this means we users are indirectly killing you! Thanks a lot. ๐ ",
"This page is giving me a 502. reopen this issue, please",
"Back up, and fixed the issue with auto-relaunch. Thanks for the heads up.",
"Hi! Thanks for the great demo!\r\nThe page seems to be down again.",
"This time it was actually just me making some changes. Back up.",
"I am getting a timeout on it... so, still not available I think..."
] | 1,593 | 1,598 | 1,593 | CONTRIBUTOR | null | Hi, this page was very interesting.
https://huggingface.co/zero-shot/
It has been down since yesterday.
What is up with it? Do you plan to bring it up again? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5498/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5497 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5497/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5497/comments | https://api.github.com/repos/huggingface/transformers/issues/5497/events | https://github.com/huggingface/transformers/pull/5497 | 650,572,128 | MDExOlB1bGxSZXF1ZXN0NDQ0MDM3ODM5 | 5,497 | [Generation] better error message | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=h1) Report\n> Merging [#5497](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/49281ac9390e19f30c30a914b11aa55b561973d1&el=desc) will **decrease** coverage by `0.23%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5497 +/- ##\n==========================================\n- Coverage 77.82% 77.59% -0.24% \n==========================================\n Files 141 141 \n Lines 24617 24619 +2 \n==========================================\n- Hits 19159 19103 -56 \n- Misses 5458 5516 +58 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.71% <100.00%> (-1.47%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.01% <0.00%> (-5.11%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.89% <0.00%> (-1.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5497/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=footer). 
Last update [49281ac...3e5929c](https://codecov.io/gh/huggingface/transformers/pull/5497?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Should be trivial to merge this IMO. Is there any case why one would now receive an error if `max_length` < `cur_len` @sshleifer @yjernite ?",
"Is max_length still meant to include the length of input_ids?",
"yes! Would you change it to be added to the input?",
"I don't understand your response exactly. My preference is that `max_length` is completely independent of the ids sent to the encoder. That is how the current code works. For example, if we are summarizing a news article of length `X`, and max_length is 20, we should be allowed to generate 20 tokens regardless of `X`. \r\nFor the text generation use case, I care less, but I have the same opinion. `X.shape[1]` should not matter.\r\nI don't have an opinion on whether `decoder_start_token_id` counts.\r\nWhen does this assert get hit?",
"The assert hits at the moment only for text-generation of \"encoder\" only models (GPT2, XLNET, ...) if `input_ids.shape[1] >= max_length`. \r\nIn the case of all \"conditional\" generation (using an encoder + decoder, like Bart / T5) `max_length` is independent of the input to the encoder because auto-regressive generation is only done for the decoder => which is expected.\r\n\r\nThe question is whether for text generation of encoder only models like gpt2 `max_length` should be changed to something like `max_tokens_to_generate` in which case the limit would be `input_ids.shape[1] + max_tokens_to_generate` (independent of `input_ids`). Not sure whether it's more intuitive to define the number of max length tokens **to be generated** or better the max length of the complete text, also given the name `max_length`. ",
"I think `max_length` should refer to the maximum number of tokens that can be generated.\r\nFor example, if you wanted to make a next word suggester, it would be much simpler to have max_length=1, than it would be check how many input_ids != pad_token_id for each entry in your batch and then add 1 to that.\r\n\r\nI don't think there are as many use cases where you need N tokens regardless of the length of the input. "
] | 1,593 | 1,593 | 1,593 | MEMBER | null | If `cur_len` of input context is as long or longer than `max_length` a nice error message should be shown. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5497/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5497",
"html_url": "https://github.com/huggingface/transformers/pull/5497",
"diff_url": "https://github.com/huggingface/transformers/pull/5497.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5497.patch",
"merged_at": 1593797126000
} |
https://api.github.com/repos/huggingface/transformers/issues/5496 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5496/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5496/comments | https://api.github.com/repos/huggingface/transformers/issues/5496/events | https://github.com/huggingface/transformers/pull/5496 | 650,545,000 | MDExOlB1bGxSZXF1ZXN0NDQ0MDE1ODA2 | 5,496 | QA pipeline BART compatible | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=h1) Report\n> Merging [#5496](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21cd8c40862ba356096ab4cda31563ee3a35c1bb&el=desc) will **increase** coverage by `1.14%`.\n> The diff coverage is `85.71%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5496 +/- ##\n==========================================\n+ Coverage 76.39% 77.54% +1.14% \n==========================================\n Files 141 141 \n Lines 24617 24622 +5 \n==========================================\n+ Hits 18807 19092 +285 \n+ Misses 5810 5530 -280 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.34% <50.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.12% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.01% <0.00%> (-5.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.92% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.72% <0.00%> (+73.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=footer). Last update [21cd8c4...b716a86](https://codecov.io/gh/huggingface/transformers/pull/5496?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,594 | 1,594 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5496/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5496",
"html_url": "https://github.com/huggingface/transformers/pull/5496",
"diff_url": "https://github.com/huggingface/transformers/pull/5496.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5496.patch",
"merged_at": 1594300300000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5495 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5495/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5495/comments | https://api.github.com/repos/huggingface/transformers/issues/5495/events | https://github.com/huggingface/transformers/pull/5495 | 650,535,350 | MDExOlB1bGxSZXF1ZXN0NDQ0MDA3OTE3 | 5,495 | Typo fix in `training` doc | {
"login": "arnavsharma93",
"id": 1503614,
"node_id": "MDQ6VXNlcjE1MDM2MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1503614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnavsharma93",
"html_url": "https://github.com/arnavsharma93",
"followers_url": "https://api.github.com/users/arnavsharma93/followers",
"following_url": "https://api.github.com/users/arnavsharma93/following{/other_user}",
"gists_url": "https://api.github.com/users/arnavsharma93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnavsharma93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnavsharma93/subscriptions",
"organizations_url": "https://api.github.com/users/arnavsharma93/orgs",
"repos_url": "https://api.github.com/users/arnavsharma93/repos",
"events_url": "https://api.github.com/users/arnavsharma93/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnavsharma93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=h1) Report\n> Merging [#5495](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8438bab38e1ea60efca181c92ebc7e4602f91848&el=desc) will **increase** coverage by `0.76%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5495 +/- ##\n==========================================\n+ Coverage 76.97% 77.74% +0.76% \n==========================================\n Files 141 141 \n Lines 24617 24617 \n==========================================\n+ Hits 18950 19138 +188 \n+ Misses 5667 5479 -188 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5495/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `71.83% <0.00%> (-23.95%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5495/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.41% <0.00%> (-2.27%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5495/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+1.32%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5495/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5495/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=footer). Last update [8438bab...74a8457](https://codecov.io/gh/huggingface/transformers/pull/5495?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for the fix!"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | `provides` -> `provided` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5495/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5495",
"html_url": "https://github.com/huggingface/transformers/pull/5495",
"diff_url": "https://github.com/huggingface/transformers/pull/5495.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5495.patch",
"merged_at": 1594041322000
} |
https://api.github.com/repos/huggingface/transformers/issues/5494 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5494/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5494/comments | https://api.github.com/repos/huggingface/transformers/issues/5494/events | https://github.com/huggingface/transformers/issues/5494 | 650,467,984 | MDU6SXNzdWU2NTA0Njc5ODQ= | 5,494 | The inference speed of gpt2-xl has a gap between pytorch and tensorflow. | {
"login": "zhm9484",
"id": 12964346,
"node_id": "MDQ6VXNlcjEyOTY0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/12964346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhm9484",
"html_url": "https://github.com/zhm9484",
"followers_url": "https://api.github.com/users/zhm9484/followers",
"following_url": "https://api.github.com/users/zhm9484/following{/other_user}",
"gists_url": "https://api.github.com/users/zhm9484/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhm9484/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhm9484/subscriptions",
"organizations_url": "https://api.github.com/users/zhm9484/orgs",
"repos_url": "https://api.github.com/users/zhm9484/repos",
"events_url": "https://api.github.com/users/zhm9484/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhm9484/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Turning off the eager mode of tensorflow makes inference much faster."
] | 1,593 | 1,593 | 1,593 | NONE | null | **Environment:**
- OS: Ubuntu 18.04
- Python: 3.7.6
- Transformers: 3.0.0
- PyTorch: 1.4.0
- Tensorflow: 2.2.0
- CUDA: 10.1
- CUDNN: 7.6
- GPU: V100
**My code:**
```python
import time
import torch
import tensorflow as tf
from transformers import AutoTokenizer, AutoModelWithLMHead, TFAutoModelWithLMHead

TIMES = 100
tokenizer = AutoTokenizer.from_pretrained("./gpt2-xl")

# pytorch
model = AutoModelWithLMHead.from_pretrained("./gpt2-xl")
model = model.to("cuda")
input = tokenizer.encode("This is the benchmark of gpt2-xl.", return_tensors="pt").to("cuda")
total = 0
cnt = 0
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
for i in range(TIMES):
    start.record()
    o = model(input)
    end.record()
    torch.cuda.synchronize()
    if not i:
        continue  # skip the first (warm-up) iteration
    total += start.elapsed_time(end)/1000
    cnt += 1
print("Pytorch version --- cnt: {}, avg_time_cost: {}s".format(cnt, total/cnt))

# tensorflow
gpus = tf.config.experimental.list_logical_devices('GPU')
gpu = gpus[0].name
with tf.device(gpu):
    model = TFAutoModelWithLMHead.from_pretrained("./gpt2-xl")
input = tokenizer.encode("This is the benchmark of gpt2-xl.", return_tensors="tf")
total = 0
cnt = 0
with tf.device(gpu):
    for i in range(TIMES):
        start = time.time()
        o = model(input)
        end = time.time()
        if not i:
            continue
        total += (end-start)
        cnt += 1
print("Tensorflow version --- cnt: {}, avg_time_cost: {}s".format(cnt, total/cnt))
```
**Output:**
```
Pytorch version --- cnt: 99, avg_time_cost: 0.05521844493981564s
Tensorflow version --- cnt: 99, avg_time_cost: 0.2912752628326416s
```
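Following the resolution noted above ("turning off the eager mode of tensorflow makes inference much faster"), here is a minimal sketch of the graph-mode variant: `tf.function` traces the forward pass into a graph, removing the per-call eager overhead. `gpt2` is used as a lighter stand-in for the local `./gpt2-xl` checkpoint.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in for ./gpt2-xl
model = TFAutoModelWithLMHead.from_pretrained("gpt2")
ids = tokenizer.encode("This is the benchmark of gpt2-xl.", return_tensors="tf")

@tf.function  # compile the forward pass instead of running it eagerly on every call
def forward(ids):
    return model(ids)

_ = forward(ids)  # the first call traces the graph (slow); time only subsequent calls
```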
**The utilization of gpu:**
- PyTorch 33%
- Tensorflow 8%
I'm new to Transformers. Did I use it the wrong way, or are there any mistakes in my code?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5494/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5493 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5493/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5493/comments | https://api.github.com/repos/huggingface/transformers/issues/5493/events | https://github.com/huggingface/transformers/pull/5493 | 650,451,946 | MDExOlB1bGxSZXF1ZXN0NDQzOTM5OTYw | 5,493 | Create README.md | {
"login": "savasy",
"id": 6584825,
"node_id": "MDQ6VXNlcjY1ODQ4MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6584825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/savasy",
"html_url": "https://github.com/savasy",
"followers_url": "https://api.github.com/users/savasy/followers",
"following_url": "https://api.github.com/users/savasy/following{/other_user}",
"gists_url": "https://api.github.com/users/savasy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/savasy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/savasy/subscriptions",
"organizations_url": "https://api.github.com/users/savasy/orgs",
"repos_url": "https://api.github.com/users/savasy/repos",
"events_url": "https://api.github.com/users/savasy/events{/privacy}",
"received_events_url": "https://api.github.com/users/savasy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=h1) Report\n> Merging [#5493](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21cd8c40862ba356096ab4cda31563ee3a35c1bb&el=desc) will **increase** coverage by `1.82%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5493 +/- ##\n==========================================\n+ Coverage 76.39% 78.22% +1.82% \n==========================================\n Files 141 141 \n Lines 24617 24617 \n==========================================\n+ Hits 18807 19257 +450 \n+ Misses 5810 5360 -450 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.47% <0.00%> (-49.57%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <0.00%> (+17.80%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n| ... 
and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5493/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=footer). Last update [21cd8c4...5e82926](https://codecov.io/gh/huggingface/transformers/pull/5493?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5493/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5493",
"html_url": "https://github.com/huggingface/transformers/pull/5493",
"diff_url": "https://github.com/huggingface/transformers/pull/5493.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5493.patch",
"merged_at": 1594118590000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5492 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5492/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5492/comments | https://api.github.com/repos/huggingface/transformers/issues/5492/events | https://github.com/huggingface/transformers/pull/5492 | 650,451,004 | MDExOlB1bGxSZXF1ZXN0NDQzOTM5MTgy | 5,492 | Update README.md | {
"login": "savasy",
"id": 6584825,
"node_id": "MDQ6VXNlcjY1ODQ4MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6584825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/savasy",
"html_url": "https://github.com/savasy",
"followers_url": "https://api.github.com/users/savasy/followers",
"following_url": "https://api.github.com/users/savasy/following{/other_user}",
"gists_url": "https://api.github.com/users/savasy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/savasy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/savasy/subscriptions",
"organizations_url": "https://api.github.com/users/savasy/orgs",
"repos_url": "https://api.github.com/users/savasy/repos",
"events_url": "https://api.github.com/users/savasy/events{/privacy}",
"received_events_url": "https://api.github.com/users/savasy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=h1) Report\n> Merging [#5492](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21cd8c40862ba356096ab4cda31563ee3a35c1bb&el=desc) will **increase** coverage by `1.82%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5492 +/- ##\n==========================================\n+ Coverage 76.39% 78.22% +1.82% \n==========================================\n Files 141 141 \n Lines 24617 24617 \n==========================================\n+ Hits 18807 19257 +450 \n+ Misses 5810 5360 -450 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.47% <0.00%> (-49.57%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <0.00%> (+17.80%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n| ... 
and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5492/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=footer). Last update [21cd8c4...31db433](https://codecov.io/gh/huggingface/transformers/pull/5492?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | I set the language | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5492/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5492",
"html_url": "https://github.com/huggingface/transformers/pull/5492",
"diff_url": "https://github.com/huggingface/transformers/pull/5492.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5492.patch",
"merged_at": 1594118615000
} |
https://api.github.com/repos/huggingface/transformers/issues/5491 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5491/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5491/comments | https://api.github.com/repos/huggingface/transformers/issues/5491/events | https://github.com/huggingface/transformers/pull/5491 | 650,450,337 | MDExOlB1bGxSZXF1ZXN0NDQzOTM4NjE1 | 5,491 | Update README.md | {
"login": "savasy",
"id": 6584825,
"node_id": "MDQ6VXNlcjY1ODQ4MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6584825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/savasy",
"html_url": "https://github.com/savasy",
"followers_url": "https://api.github.com/users/savasy/followers",
"following_url": "https://api.github.com/users/savasy/following{/other_user}",
"gists_url": "https://api.github.com/users/savasy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/savasy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/savasy/subscriptions",
"organizations_url": "https://api.github.com/users/savasy/orgs",
"repos_url": "https://api.github.com/users/savasy/repos",
"events_url": "https://api.github.com/users/savasy/events{/privacy}",
"received_events_url": "https://api.github.com/users/savasy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=h1) Report\n> Merging [#5491](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21cd8c40862ba356096ab4cda31563ee3a35c1bb&el=desc) will **increase** coverage by `1.82%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5491 +/- ##\n==========================================\n+ Coverage 76.39% 78.22% +1.82% \n==========================================\n Files 141 141 \n Lines 24617 24617 \n==========================================\n+ Hits 18807 19257 +450 \n+ Misses 5810 5360 -450 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.47% <0.00%> (-49.57%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <0.00%> (+17.80%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n| ... 
and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5491/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=footer). Last update [21cd8c4...cf45716](https://codecov.io/gh/huggingface/transformers/pull/5491?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5491/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5491",
"html_url": "https://github.com/huggingface/transformers/pull/5491",
"diff_url": "https://github.com/huggingface/transformers/pull/5491.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5491.patch",
"merged_at": 1594118629000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5490 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5490/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5490/comments | https://api.github.com/repos/huggingface/transformers/issues/5490/events | https://github.com/huggingface/transformers/issues/5490 | 650,405,706 | MDU6SXNzdWU2NTA0MDU3MDY= | 5,490 | [ERROR] Tokenizer and TokenizerFast ??? | {
"login": "1512262",
"id": 28997653,
"node_id": "MDQ6VXNlcjI4OTk3NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/28997653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1512262",
"html_url": "https://github.com/1512262",
"followers_url": "https://api.github.com/users/1512262/followers",
"following_url": "https://api.github.com/users/1512262/following{/other_user}",
"gists_url": "https://api.github.com/users/1512262/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1512262/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1512262/subscriptions",
"organizations_url": "https://api.github.com/users/1512262/orgs",
"repos_url": "https://api.github.com/users/1512262/repos",
"events_url": "https://api.github.com/users/1512262/events{/privacy}",
"received_events_url": "https://api.github.com/users/1512262/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
}
] | [
"This is related to https://github.com/huggingface/transformers/issues/2917. In the slow tokenizers, when `do_lower_case=False` we don't strip accents, while we do it when `do_lower_case=True`. In the fast tokenizers, this is controlled by the `strip_accents` option, which is `True` here.\r\n\r\n@thomwolf How do you think we should fix this?",
"Yes let's do it @n1t0 and stick to the official bert tokenizer behavior in the fast tokenizers as well."
] | 1,593 | 1,594 | 1,594 | NONE | null | # ๐ Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): 'bert-base-multilingual-cased'
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. `from transformers import *`
2. `tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-cased')`
3. `tokenizer.decode(tokenizer.encode('mแป bร i lแบกc trรดi'))` --> wrong (the accents are stripped from the output)
but with the slow tokenizer:
1. `from transformers import *`
2. `tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')`
3. `tokenizer.decode(tokenizer.encode('mแป bร i lแบกc trรดi'))` --> correct (round-trips the input)
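(From the maintainers' comments on this issue, the fast tokenizer strips accents because its `strip_accents` option defaults to `True` here. A hedged sketch of a workaround; whether the kwarg is accepted depends on the installed version:)
```
from transformers import BertTokenizerFast

# Sketch based on the maintainers' comments: disable accent stripping so
# that encode/decode round-trips Vietnamese text. The `strip_accents`
# kwarg is an assumption about the installed tokenizer version.
tokenizer = BertTokenizerFast.from_pretrained(
    'bert-base-multilingual-cased', strip_accents=False
)
print(tokenizer.decode(tokenizer.encode('mแป bร i lแบกc trรดi')))
```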
## Expected behavior
The sentence decoded after an encode/decode round-trip with the fast tokenizer should match the original input, as it does with the slow tokenizer.
## Environment info
- `transformers` version:
- Platform: Pytorch and TF
- Python version: 3.6
- PyTorch version (GPU?): GPU
- Tensorflow version (GPU?): 2.2
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5490/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5489 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5489/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5489/comments | https://api.github.com/repos/huggingface/transformers/issues/5489/events | https://github.com/huggingface/transformers/issues/5489 | 650,381,330 | MDU6SXNzdWU2NTAzODEzMzA= | 5,489 | encoder_outputs are always the same when generating with different inputs | {
"login": "bobshih",
"id": 15016623,
"node_id": "MDQ6VXNlcjE1MDE2NjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/15016623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bobshih",
"html_url": "https://github.com/bobshih",
"followers_url": "https://api.github.com/users/bobshih/followers",
"following_url": "https://api.github.com/users/bobshih/following{/other_user}",
"gists_url": "https://api.github.com/users/bobshih/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bobshih/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bobshih/subscriptions",
"organizations_url": "https://api.github.com/users/bobshih/orgs",
"repos_url": "https://api.github.com/users/bobshih/repos",
"events_url": "https://api.github.com/users/bobshih/events{/privacy}",
"received_events_url": "https://api.github.com/users/bobshih/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hmm, this will be hard to debug here. I'm currently working on getting a working example of a Bert2Bert model, so I will keep an eye on `encoder_output` bugs!\r\nSee conversation here: https://github.com/huggingface/transformers/issues/4443#issuecomment-656691026",
"Thank you for your reply.\r\nI am looking forward your Bert2Bert example. And I hope we can solve this problem.",
"Hey @bobshih, \r\n\r\nTraining a Bert2Bert model worked out fine for me - I did not experience any bugs related to `encoder_outputs`. \r\nYou can check out the model and all the code to reproduce the results here:\r\nhttps://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16\r\n\r\nMaybe you can take a look, adapt your code and see whether the error persists :-) ",
"OK, thank for your attention.\r\nI will adapt my code after finishing my work at hand.\r\n",
"Hi, @patrickvonplaten,\r\nI have trained EncoderDecoderModel with your training example script.\r\nI noticed that if there are too many padding tokens in training data, it will make the trained model produce the same vectors despite the different inputs.\r\nbut I wonder why attention mask does not work?\r\nIn my original training setting, there are 93% padding tokens. After I reduce the max length and make padding tokens decrease to 21%, the encoderdecoder model works without problems.",
"This line:\r\n\r\nhttps://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#training-script:\r\n\r\n```python\r\n batch[\"labels\"] = [\r\n [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch[\"labels\"]\r\n ]\r\n```\r\n\r\nin the preprocessing should make sure that the PAD token does not influence the loss and thus also not the model.",
"> This line:\r\n> \r\n> https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#training-script:\r\n> \r\n> ```python\r\n> batch[\"labels\"] = [\r\n> [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch[\"labels\"]\r\n> ]\r\n> ```\r\n> \r\n> in the preprocessing should make sure that the PAD token does not influence the loss and thus also not the model.\r\n\r\nYes, I understand what you mention, and I also use this setting for models after adapting my script, but the problem shows again.\r\nI will train the model again with this setting in the weekend. And I hope there will be a different result.\r\nAgain, thank you very much for solving the problem and patience."
] | 1,593 | 1,595 | 1,595 | NONE | null | # โ Questions & Help
## Details
Hi,
I've trained a bert2bert model to generate answers to different questions.
But after training, the bert2bert model always produces the same encoder_outputs for different inputs.
Does anyone know how to fix or avoid this problem?
If I don't resize BERT's embedding size, will that solve the problem?
Thanks in advance.
## The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
## The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## Environment info
- `transformers` version: 2.11.0
- Platform: linux
- Python version: 3.7 64bit
- PyTorch version (GPU?): 1.5.0
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Using parallel setting only
Below is my training code.
The inputs are turned into token indices by `tokenizer.encode_plus`.
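(A hedged sketch of that preprocessing step; the length and argument names are assumptions matching the transformers 2.11 API, not the author's code:)
```
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
# Hypothetical preprocessing: pad/truncate to a fixed length so that the
# collate function in the script below can stack plain Python lists
# into LongTensors.
encoded = tokenizer.encode_plus("ไธ€ๅ€‹ๅ•้กŒ", max_length=128, pad_to_max_length=True)
example = {"source": {"input_ids": encoded["input_ids"]}}
```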
```
import logging
import os
import sys
import inspect
import json
import argparse
from dataclasses import dataclass, fields
from tqdm.auto import tqdm, trange
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import (
EncoderDecoderModel,
AdamW,
get_linear_schedule_with_warmup,
BertTokenizer,
PreTrainedModel
)
# import utils
logger = logging.getLogger(__name__)
@dataclass
class training_args:
weight_decay: float = 0.0
learning_rate: float = 5e-5
adam_epsilon: float = 1e-8
warmup_steps: int = 0
gradient_accumulation_steps: int = 1
# num_train_epochs: 10
max_grad_norm: float = 1.0
early_stop: float = 1e-5
stop_barrier: float = 1e-5
def set_args():
parser = argparse.ArgumentParser()
parser.add_argument("--vocab_file", default='vocab_trad_clean.txt')
# parser.add_argument("--encoder_config", default='Configs/encoder.json')
# parser.add_argument("--decoder_config", default='Configs/decoder.json')
parser.add_argument("--data_folder", required=True)
# parser.add_argument("--output_folder", required=True)
# parser.add_argument("--from_pretrained", action='store_true')
parser.add_argument("--logging_steps", default=1000, type=int)
parser.add_argument("--save_total_limit", default=5, type=int)
parser.add_argument("--save_steps", default=10000, type=int)
parser.add_argument("--batch_size", default=20, type=int)
parser.add_argument("--num_train_epochs", default=30, type=int)
args = parser.parse_args()
return args
class Generator_Data(Dataset):
def __init__(self, data):
super(Generator_Data, self).__init__()
self.inputs = []
self.outputs = []
for example in data:
self.inputs.append(example['source'])
self.outputs.append(example['target'])
def __len__(self):
return len(self.inputs)
def __getitem__(self, index):
return self.inputs[index], self.outputs[index]
def collate_fn(batch):
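    # NOTE: no attention_mask is built here, so padding tokens are
    # attended to by the encoder; see the hedged sketch after this script.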
input_dict = {
"input_ids": [],
"decoder_input_ids": [],
"labels": [],
}
for data in batch:
input_data = data[0]
output_data = data[1]
input_dict["input_ids"].append(input_data["input_ids"])
input_dict["decoder_input_ids"].append(output_data["input_ids"])
input_dict["labels"].append(output_data["input_ids"])
input_dict = {k: torch.LongTensor(v) for k, v in input_dict.items()}
return input_dict
def Get_DataLoader(data_file, batch_size, training=False):
if not os.path.isfile(data_file):
raise Exception(f"data file [{data_file}] doesn\'t exist in util, LoadDataset")
logger.info(f"start loading data from {data_file}")
data = torch.load(data_file)
dataset = Generator_Data(data)
logger.info("turn dataset into dataloader")
if training:
loader = DataLoader(dataset, batch_size, shuffle=True, collate_fn=collate_fn)
else:
loader = DataLoader(dataset, batch_size, shuffle=False, collate_fn=collate_fn)
return loader
if __name__ == "__main__":
args = set_args()
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese', vocab_file=args.vocab_file)
tokenizer.add_tokens('[NewLine]')
tokenizer.add_tokens('[space]')
args.output_folder = 'Seq2Seq_Transformers/Model/test'
os.makedirs(args.output_folder, exist_ok=True)
tokenizer.save_pretrained(args.output_folder)
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-chinese", "bert-base-chinese")
model.encoder.resize_token_embeddings(len(tokenizer))
model.decoder.resize_token_embeddings(len(tokenizer))
model.config.encoder.vocab_size = len(tokenizer)
model.config.decoder.vocab_size = len(tokenizer)
if torch.cuda.is_available():
args.device = torch.device("cuda")
args.n_gpu = torch.cuda.device_count()
else:
args.device = torch.device("cpu")
args.n_gpu = 0
model.to(args.device)
if args.n_gpu > 1:
model = torch.nn.DataParallel(model)
# loading the data
train_pt_file = os.path.join(args.data_folder, 'train.pt')
valid_pt_file = os.path.join(args.data_folder, 'valid.pt')
train_dataloader = Get_DataLoader(train_pt_file, batch_size=args.batch_size, training=True)
valid_dataloader = Get_DataLoader(valid_pt_file, batch_size=args.batch_size)
# Prepare optimizer and schedule (linear warmup and decay)
t_total = int(len(train_dataloader) // training_args.gradient_accumulation_steps * args.num_train_epochs)
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": training_args.weight_decay
},
{
"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
},
]
optimizer = AdamW(optimizer_grouped_parameters, lr=training_args.learning_rate, eps=training_args.adam_epsilon)
scheduler = get_linear_schedule_with_warmup(
optimizer, num_warmup_steps=training_args.warmup_steps, num_training_steps=t_total
)
# start training
logger.info("***************************")
for field in fields(training_args):
logger.info(f"{field.name}: {getattr(training_args, field.name)}")
logger.info("***************************")
global_step = 0
tr_loss = 0.0
logging_loss = 0.0
loss_scalar = 1000000
previous_loss_scaler = -1
model.train()
model.zero_grad()
for epoch in tqdm(range(args.num_train_epochs), desc="Epoch", ascii=True):
epoch_iterator = tqdm(train_dataloader, desc="Iteration", ascii=True)
for step, inputs in enumerate(epoch_iterator):
model.train()
for k, v in inputs.items():
inputs[k] = v.to(args.device)
outputs = model(**inputs)
# loss, outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=inputs["input_ids"], lm_labels=inputs["input_ids"])[:2]
loss = outputs[0] # model outputs are always tuple in transformers (see doc)
if args.n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu parallel training
if training_args.gradient_accumulation_steps > 1:
loss = loss / training_args.gradient_accumulation_steps
loss.backward()
tr_loss += loss.item()
if (step + 1) % training_args.gradient_accumulation_steps == 0 or (
# last step in epoch but step is always smaller than gradient_accumulation_steps
len(epoch_iterator) <= training_args.gradient_accumulation_steps
and (step + 1) == len(epoch_iterator)
):
torch.nn.utils.clip_grad_norm_(model.parameters(), training_args.max_grad_norm)
optimizer.step()
scheduler.step()
model.zero_grad()
global_step += 1
if args.logging_steps > 0 and global_step % args.logging_steps == 0:
logs = {}
loss_scalar = (tr_loss - logging_loss) / args.logging_steps
learning_rate_scalar = scheduler.get_last_lr()[0]
logs["learning_rate"] = learning_rate_scalar
logs["loss"] = loss_scalar
logs["loss_difference"] = abs(loss_scalar-previous_loss_scaler)
previous_loss_scaler = loss_scalar
logging_loss = tr_loss
epoch_iterator.write(json.dumps({**logs, **{"step": global_step}}))
if loss_scalar < training_args.early_stop:# or logs["loss_difference"] < training_args.stop_barrier:
break
if args.save_steps > 0 and global_step % args.save_steps == 0:
# Save model checkpoint
output_dir = os.path.join(args.output_folder, f"checkpoint-{global_step}")
os.makedirs(output_dir, exist_ok=True)
logger.info("Saving model checkpoint to %s", output_dir)
# Save a trained model and configuration using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
if isinstance(model, torch.nn.DataParallel):
model = model.module
if not isinstance(model, PreTrainedModel):
raise ValueError("Trainer.model appears to not be a PreTrainedModel")
model.save_pretrained(output_dir)
torch.save(optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
torch.save(scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt"))
logger.info("Saving optimizer and scheduler states to %s", output_dir)
if loss_scalar < training_args.early_stop:
break
output_dir = args.output_folder
os.makedirs(output_dir, exist_ok=True)
logger.info("Saving model checkpoint to %s", output_dir)
# Save a trained model and configuration using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
if isinstance(model, torch.nn.DataParallel):
model = model.module
if not isinstance(model, PreTrainedModel):
raise ValueError("Trainer.model appears to not be a PreTrainedModel")
model.save_pretrained(output_dir)
```
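(Based on the label masking discussed in the comments, a hedged sketch of a collate function that also builds an attention mask and ignores pad tokens in the loss. The `pad_token_id` argument and the pre-padded inputs are assumptions, not the author's code:)
```
import torch

def collate_fn_with_mask(batch, pad_token_id):
    # Hedged sketch: assumes every example is already padded to a fixed
    # length, as in the script above.
    input_ids = torch.LongTensor([src["input_ids"] for src, _ in batch])
    decoder_input_ids = torch.LongTensor([tgt["input_ids"] for _, tgt in batch])
    # Mask padding so the encoder ignores it.
    attention_mask = (input_ids != pad_token_id).long()
    # Per the comments, pad tokens in the labels are replaced with -100
    # so that they do not contribute to the loss.
    labels = decoder_input_ids.masked_fill(decoder_input_ids == pad_token_id, -100)
    return {
        "input_ids": input_ids,
        "attention_mask": attention_mask,
        "decoder_input_ids": decoder_input_ids,
        "labels": labels,
    }
```
Without the mask, the encoder attends to padding, which matches the identical-outputs behavior reported in this issue.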
Besides, the encoder_outputs are identical at every time step, as in the picture below, which seems very strange.
I am not sure whether these are the same problem.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5489/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5488 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5488/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5488/comments | https://api.github.com/repos/huggingface/transformers/issues/5488/events | https://github.com/huggingface/transformers/issues/5488 | 650,368,481 | MDU6SXNzdWU2NTAzNjg0ODE= | 5,488 | Cannot train RoBERTa from scratch with multiple nodes and multiple GPUs | {
"login": "chiyuzhang94",
"id": 33407613,
"node_id": "MDQ6VXNlcjMzNDA3NjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/33407613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiyuzhang94",
"html_url": "https://github.com/chiyuzhang94",
"followers_url": "https://api.github.com/users/chiyuzhang94/followers",
"following_url": "https://api.github.com/users/chiyuzhang94/following{/other_user}",
"gists_url": "https://api.github.com/users/chiyuzhang94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chiyuzhang94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chiyuzhang94/subscriptions",
"organizations_url": "https://api.github.com/users/chiyuzhang94/orgs",
"repos_url": "https://api.github.com/users/chiyuzhang94/repos",
"events_url": "https://api.github.com/users/chiyuzhang94/events{/privacy}",
"received_events_url": "https://api.github.com/users/chiyuzhang94/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I figured out the issue. I should not set `--local_rank=$SLURM_LOCALID` in the argument. `torch.distributed.launch` will automatically pass the right --local_rank value to run_language_modeling.py. "
] | 1,593 | 1,594 | 1,594 | NONE | null | # ๐ Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
- [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
- [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Use the `transformers/examples/language-modeling/run_language_modeling.py` script to train a RoBERTa model from scratch.
2. I am using the Slurm tool to submit a job that requires multiple nodes and multiple GPUs. In this case, it requests 2 nodes with 2 GPUs each. The submission script:
```
#!/bin/bash
#SBATCH --time=3:00:00
#SBATCH --ntasks=2
#SBATCH --nodes=2
#SBATCH --cpus-per-task=2
#SBATCH --gres=gpu:2
#SBATCH --mem=64G
#SBATCH --job-name=pre-train
#SBATCH --output=pre-train.out
#SBATCH --account=XXXXXX
module load gcc
module load cuda cudnn
module load openmpi nccl
source ~/roberta/bin/activate
export NCCL_DEBUG=INFO
export NPROC_PER_NODE=2
export HDF5_USE_FILE_LOCKING='FALSE'
export PARENT=`/bin/hostname -s`
export MPORT=13000
export CHILDREN=`scontrol show hostnames $SLURM_JOB_NODELIST | grep -v $PARENT`
export HOSTLIST="$PARENT $CHILDREN"
echo $HOSTLIST
export WORLD_SIZE=$SLURM_NTASKS
srun distributed_runner.sh
```
3. The `distributed_runner.sh` script is:
```
#!/bin/bash
/bin/hostname -s
source ~/roberta/bin/activate
python3 -m torch.distributed.launch \
--nproc_per_node=$NPROC_PER_NODE \
--nnodes=$SLURM_JOB_NUM_NODES \
--node_rank=$SLURM_PROCID \
--master_addr="$PARENT" --master_port="$MPORT" \
run_language_modeling.py \
--gradient_accumulation_steps=16 \
--train_data_file="./data/sample.txt" \
--output_dir="./sample_model/" \
--model_type=roberta \
--mlm \
--local_rank=$SLURM_LOCALID \
--config_name="./sample_config" \
--tokenizer_name="./sample_config" \
--do_train \
--line_by_line \
--learning_rate=1e-4 \
--num_train_epochs=40 \
--save_total_limit=5 \
--save_steps=20 \
--per_gpu_train_batch_size=16 \
--seed=42
```
4. `sample.txt` is a dummy training set with 20K lines, where each line is a text string. The data is already pre-processed following the official instructions. `sample_config/` includes all the pre-processed vocabulary and config files: config.json, merges.txt, tokenizer_config.json, and vocab.json.
5. The job launches successfully on both nodes with no errors, but **each node only uses one GPU**, as shown by the `nvidia-smi` output below.
**Node 1**
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:3B:00.0 Off | 0 |
| N/A 40C P0 70W / 300W | 21175MiB / 32510MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:86:00.0 Off | 0 |
| N/A 35C P0 42W / 300W | 11MiB / 32510MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 190822 C /home/py3.6/bin/python3 13003MiB |
| 0 190823 C /home/py3.6/bin/python3 8161MiB |
+-----------------------------------------------------------------------------+
```
**Node 2**
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:18:00.0 Off | 0 |
| N/A 43C P0 77W / 300W | 17127MiB / 32510MiB | 97% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:3B:00.0 Off | 0 |
| N/A 33C P0 40W / 300W | 11MiB / 32510MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 209232 C /home/py3.6/bin/python3 6899MiB |
| 0 209233 C /home/py3.6/bin/python3 10217MiB |
+-----------------------------------------------------------------------------+
```
## Expected behavior
Each node should use two GPUs; that is, the Processes table should show usage on both GPU 0 and GPU 1 for each node.
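(Per the author's own resolution in the comments, the fix was to stop passing `--local_rank` manually. A hedged sketch of the corrected launcher, otherwise identical to the script above:)
```
# Per the resolution: do NOT pass --local_rank yourself;
# torch.distributed.launch injects the correct value automatically.
python3 -m torch.distributed.launch \
    --nproc_per_node=$NPROC_PER_NODE \
    --nnodes=$SLURM_JOB_NUM_NODES \
    --node_rank=$SLURM_PROCID \
    --master_addr="$PARENT" --master_port="$MPORT" \
    run_language_modeling.py \
    --train_data_file="./data/sample.txt" \
    --output_dir="./sample_model/" \
    --model_type=roberta --mlm --do_train \
    --config_name="./sample_config" \
    --tokenizer_name="./sample_config"
```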
## Environment info
- `transformers` version: 3.0.1
- Platform: CentOS Linux 7 (Core)
- Python version: Python 3.6.3
- PyTorch version (GPU?): torch==1.5.1
- Tensorflow version (GPU?): tensorflow-gpu==2.1.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5488/reactions",
"total_count": 6,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/5488/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5487 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5487/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5487/comments | https://api.github.com/repos/huggingface/transformers/issues/5487/events | https://github.com/huggingface/transformers/issues/5487 | 650,360,907 | MDU6SXNzdWU2NTAzNjA5MDc= | 5,487 | Better TPU Support in examples | {
"login": "YuxianMeng",
"id": 11677047,
"node_id": "MDQ6VXNlcjExNjc3MDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/11677047?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YuxianMeng",
"html_url": "https://github.com/YuxianMeng",
"followers_url": "https://api.github.com/users/YuxianMeng/followers",
"following_url": "https://api.github.com/users/YuxianMeng/following{/other_user}",
"gists_url": "https://api.github.com/users/YuxianMeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YuxianMeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YuxianMeng/subscriptions",
"organizations_url": "https://api.github.com/users/YuxianMeng/orgs",
"repos_url": "https://api.github.com/users/YuxianMeng/repos",
"events_url": "https://api.github.com/users/YuxianMeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/YuxianMeng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I agree with this request. TPU training pipeline is very fragile, lacking and needs more attention. Encouraging more use of TPU by providing easy examples would increase its usage resulting in a higher quality TPU Training system over time as more people contribute.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | # ๐ Feature request
I tried to train BERT on TPU recently and found that your [examples](https://github.com/huggingface/transformers/blob/master/examples) already contain work on this topic. However, some of the code looks experimental and not quite ready to use. The combination of [XLA and pytorch-lightning](https://github.com/huggingface/transformers/blob/master/examples/lightning_base.py) looks great, but it does not seem to be used in any training script or mentioned in the documentation. I'd like to know when this code will be ready, and I'd be very glad to contribute.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5487/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5486 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5486/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5486/comments | https://api.github.com/repos/huggingface/transformers/issues/5486/events | https://github.com/huggingface/transformers/issues/5486 | 650,347,856 | MDU6SXNzdWU2NTAzNDc4NTY= | 5,486 | Tokenizers throwing warning "The current process just got forked, Disabling parallelism to avoid deadlocks.. To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)" | {
"login": "saahiluppal",
"id": 47444392,
"node_id": "MDQ6VXNlcjQ3NDQ0Mzky",
"avatar_url": "https://avatars.githubusercontent.com/u/47444392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saahiluppal",
"html_url": "https://github.com/saahiluppal",
"followers_url": "https://api.github.com/users/saahiluppal/followers",
"following_url": "https://api.github.com/users/saahiluppal/following{/other_user}",
"gists_url": "https://api.github.com/users/saahiluppal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saahiluppal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saahiluppal/subscriptions",
"organizations_url": "https://api.github.com/users/saahiluppal/orgs",
"repos_url": "https://api.github.com/users/saahiluppal/repos",
"events_url": "https://api.github.com/users/saahiluppal/events{/privacy}",
"received_events_url": "https://api.github.com/users/saahiluppal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
}
] | [
"This might help you: https://stackoverflow.com/questions/62691279/how-to-disable-tokenizers-parallelism-true-false-warning",
"I suspect this may be caused by loading data. In my case, it happens when my dataloader starts working.",
"This is happening whenever you use `multiprocessing` (Often used by data loaders). The way to disable this warning is to set the `TOKENIZERS_PARALLELISM` environment variable to the value that makes more sense for you. By default, we disable the parallelism to avoid any hidden deadlock that would be hard to debug, but you might be totally fine while keeping it enabled in your specific use-case.\r\n\r\nYou can try to set it to `true`, and if your process seems to be stuck, doing nothing, then you should use `false`.\r\n\r\nWe'll improve this message to help avoid any confusion (Cf https://github.com/huggingface/tokenizers/issues/328)",
"I may be a rookie, but it seems like it would be useful to indicate that this is an environment variable in the warning message.",
"You are totally right! In the latest version `3.0.2`, the warning message should be a lot better, and it will trigger only when necessary.",
"Hi, sorry to bump this thread... \r\n\r\nI'm having the same problem however, the tokenizer is used only in my model. \r\n\r\nData loading is made with multiple workers but it is only loading raw text which is then given to the model and only the model uses the tokenizer. \r\nI don't have multi model or whatever, just a classic pytorch model. \r\n\r\nThus I was wondering how can I have the warning. \r\n\r\nThanks in advance, \r\nHave a great day :) ",
"You must be using a tokenizer before using `multiprocessing`. When your process gets forked, you see this message because it detects that a fork is happening and that some kind of parallelism was used before.",
"@n1t0, \r\nThanks a lot for the fast reply, \r\nI guess it detect a fork even if it's safe for me to do so... Yes my process is forked but not the tokenizer. \r\n\r\nThen I will use the env variable to remove the warning. ",
"I use ```tokenizer``` in my data loader.\r\n\r\nIf that is the source of this problem (hence disabling the parallelization --> hence slow training), then what is the solution? \r\n\r\nUsing ```tokenizer``` in the pre-processing step? ",
"After testing, it is found that when the data in a dataloader is processed by the token, and the datalodaer jumps out before it is finished, this warning will be triggered; \r\nI give a code example:\r\n```\r\n# for example, following code will trigger the warning\r\nfor texts in train_dataloader:\r\n _ = tokenizer.batch_encode_plus(texts)\r\n # loader has not been traversed\r\n # but texts are used\r\n break \r\nfor texts in test_dataloader:\r\n # warning ...\r\n pass or break\r\n\r\n# and following code will not trigger the warning\r\nfor texts in train_dataloader:\r\n # loader has not been traversed\r\n # but texts are not used\r\n break \r\nfor texts in test_dataloader:\r\n # No warning \r\n pass or break\r\n```",
"@hbchen121 my dataloader processes the text in init function\r\n\r\nDuring data loading time, directly input_ids and attention masks are fetched, yet I get this warning.",
"Despite [the documentation](https://huggingface.co/transformers/v3.0.2/model_doc/auto.html) saying that `use_fast` defaults to `False`, adding `use_fast=False` so that it's `AutoTokenizer.from_pretrained(model_name, use_fast=False)` removed this warning for me. If I just use `AutoTokenizer.from_pretrained(model_name)`, the warning pops up again.",
"I want to know if we can ignore this warning. What bad effects will it have? Will it affect the training results? Or is it just a little slower? If the environment variables are changed according to the above solution, what is the cost of doing so?",
"cc @ArthurZucker",
"> I want to know if we can ignore this warning. What bad effects will it have? Will it affect the training results? Or is it just a little slower? If the environment variables are changed according to the above solution, what is the cost of doing so?\r\n\r\n@hzphzp there is an explanation in SO\r\nhttps://stackoverflow.com/questions/62691279/how-to-disable-tokenizers-parallelism-true-false-warning/72926996#72926996",
"> > I want to know if we can ignore this warning. What bad effects will it have? Will it affect the training results? Or is it just a little slower? If the environment variables are changed according to the above solution, what is the cost of doing so?\r\n> \r\n> @hzphzp there is an explanation in SO https://stackoverflow.com/questions/62691279/how-to-disable-tokenizers-parallelism-true-false-warning/72926996#72926996\r\n\r\n Thank you!",
"Though each notebook runs fine by itself, I get this warning when running multiple notebooks via `nbdev_test` (https://github.com/fastai/nbdev). Shortly afterwards it crashes due to out-of-memory. \r\n\r\nI assume it has something to do with multiprocessing in `nbdev_test`, even when setting `--n_workers 1`.\r\n\r\nThis gets a warning about disabling parallelism to avoid locks:\r\n> ```nbdev_test --n_workers 1 --pause 10 --do_print --file_glob \"*nb\"```\r\n\r\nThis works fine:\r\n> ```$ for x in `ls nbs/*nb`; do nbdev_test --n_workers 1 --do_print --path \"$x\"; done```"
] | 1,593 | 1,705 | 1,594 | NONE | null | I know this warning is because the transformer library is updated to 3.x.
I know the warning saying to set TOKENIZERS_PARALLELISM = true / false
My question is where should i set TOKENIZERS_PARALLELISM = true / false
is this when defining tokenizers like
```
tok = Tokenizer.from_pretrained('xyz', TOKENIZERS_PARALLELISM=True) // this doesn't work
```
or is this when encoding text like
```
tok.encode_plus(text_string, some=some, some=some, TOKENIZERS_PARALLELISM=True)  # this also didn't work
```
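(For reference, the resolution from the comments above: `TOKENIZERS_PARALLELISM` is an environment variable, not a tokenizer argument. A minimal sketch, set before tokenizers are used or the process forks:)
```
import os

# Must be set before any tokenizer work / before multiprocessing forks.
os.environ["TOKENIZERS_PARALLELISM"] = "false"  # or "true"
```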
Suggestions anyone? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5486/reactions",
"total_count": 43,
"+1": 32,
"-1": 1,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 10
} | https://api.github.com/repos/huggingface/transformers/issues/5486/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5485 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5485/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5485/comments | https://api.github.com/repos/huggingface/transformers/issues/5485/events | https://github.com/huggingface/transformers/issues/5485 | 650,333,781 | MDU6SXNzdWU2NTAzMzM3ODE= | 5,485 | Bert-extractive-summarizer importing issue | {
"login": "Vinu-4590",
"id": 65990730,
"node_id": "MDQ6VXNlcjY1OTkwNzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/65990730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vinu-4590",
"html_url": "https://github.com/Vinu-4590",
"followers_url": "https://api.github.com/users/Vinu-4590/followers",
"following_url": "https://api.github.com/users/Vinu-4590/following{/other_user}",
"gists_url": "https://api.github.com/users/Vinu-4590/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vinu-4590/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vinu-4590/subscriptions",
"organizations_url": "https://api.github.com/users/Vinu-4590/orgs",
"repos_url": "https://api.github.com/users/Vinu-4590/repos",
"events_url": "https://api.github.com/users/Vinu-4590/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vinu-4590/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | Hi ,
I am facing issues while import summarizer.
NameError Traceback (most recent call last)
<ipython-input-6-3b4384c20fe2> in <module>()
----> 1 from summarizer import Summarizer
2
3 body = 'Text body that you want to summarize with BERT'
4 body2 = 'Something else you want to summarize with BERT'
5 model = Summarizer()
~/anaconda3/envs/amazonei_tensorflow_p36/lib/python3.6/site-packages/summarizer/__init__.py in <module>()
----> 1 from summarizer.model_processors import Summarizer, SingleModel, TransformerSummarizer
~/anaconda3/envs/amazonei_tensorflow_p36/lib/python3.6/site-packages/summarizer/model_processors.py in <module>()
----> 1 from summarizer.bert_parent import BertParent
2 from summarizer.cluster_features import ClusterFeatures
3 from summarizer.sentence_handler import SentenceHandler
4 from typing import List
5 from abc import abstractmethod
~/anaconda3/envs/amazonei_tensorflow_p36/lib/python3.6/site-packages/summarizer/bert_parent.py in <module>()
9
10
---> 11 class BertParent(object):
12
13 """
~/anaconda3/envs/amazonei_tensorflow_p36/lib/python3.6/site-packages/summarizer/bert_parent.py in BertParent()
16
17 MODELS = {
---> 18 'bert-base-uncased': (BertModel, BertTokenizer),
19 'bert-large-uncased': (BertModel, BertTokenizer),
20 'xlnet-base-cased': (XLNetModel, XLNetTokenizer),
**NameError: name 'BertModel' is not defined**
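(A hypothetical first diagnostic, not from the thread: a `NameError` at the `MODELS` dict usually means the `transformers` import inside `bert_parent.py` failed, so checking that import directly in the same environment can narrow it down:)
```
# Hypothetical check, not part of the original report: if this raises,
# the summarizer's NameError is a downstream symptom of a broken
# transformers installation in this environment.
from transformers import BertModel, BertTokenizer
print(BertModel, BertTokenizer)
```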
Please help me with this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5485/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5484 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5484/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5484/comments | https://api.github.com/repos/huggingface/transformers/issues/5484/events | https://github.com/huggingface/transformers/issues/5484 | 650,331,874 | MDU6SXNzdWU2NTAzMzE4NzQ= | 5,484 | Error using t5-base-cnn | {
"login": "manojpreveen",
"id": 64023526,
"node_id": "MDQ6VXNlcjY0MDIzNTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/64023526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manojpreveen",
"html_url": "https://github.com/manojpreveen",
"followers_url": "https://api.github.com/users/manojpreveen/followers",
"following_url": "https://api.github.com/users/manojpreveen/following{/other_user}",
"gists_url": "https://api.github.com/users/manojpreveen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manojpreveen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manojpreveen/subscriptions",
"organizations_url": "https://api.github.com/users/manojpreveen/orgs",
"repos_url": "https://api.github.com/users/manojpreveen/repos",
"events_url": "https://api.github.com/users/manojpreveen/events{/privacy}",
"received_events_url": "https://api.github.com/users/manojpreveen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Use the standard `T5Tokenizer.from_pretrained('t5-base')`",
"and I would love to hear your results!",
"The outputs of t5-base-cnn are good! Will let you know when I run over a bigger dataset.\r\nMy doubt is how many epochs is ideal for fine-tuning t5 models over cnn/dm?\r\nMy t5-small fine-tuned over cnn/dm code(number of epochs ran : 1) produces okayish results but not that great. Even some sentences in the output didn't get completed at the end and looked as if it's cut.",
"I don't know how many epochs to train t5-small for, but our new `finetune.py` tracks the validation rouge 2 score. Usually when that stops increasing, the model will not further improve.\r\nAlso, since epochs are so long, there is a `--val_check_interval` argument that you can use to check this statistic more frequently than the default, every epoch.",
"Thank you, Sam. You solve my problem too. Benefit a lot from your work.",
"Happy to help. Closing this for now, feel free to open a new issue if you run into more problems!",
"Hello everyone!\r\n\r\nI used direct T5Tokenizer.from_pretrained('t5-base')... however, I got the following error:\r\n\r\nOSError: Model name 't5-base' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 't5-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\r\n\r\nHow can I resolve this issue? or from where can I download the model manually?\r\n\r\nThank you in advance\r\n",
"I tried to generate results using ```sshleifer/t5-base-cnn``` and changed the tokenizer to ```tokenizer = T5Tokenizer.from_pretrained('t5-base')``` and failed. \r\n\r\nMy code:\r\n```\r\npython run_eval.py sshleifer/t5-base-cnn $DATA_DIR/test.source $OUTPUT_FILE \\\r\n --reference_path $DATA_DIR/test.target \\\r\n --task summarization \\\r\n --device cuda \\\r\n --fp16 \\\r\n --bs 32 \\\r\n```\r\n\r\nFollowing is the error message:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_eval.py\", line 127, in <module>\r\n run_generate()\r\n File \"run_eval.py\", line 112, in run_generate\r\n checkpoint_path=args.checkpoint_path,\r\n File \"run_eval.py\", line 63, in generate_summaries_or_translations\r\n **gen_kwargs,\r\n File \"/home/rachelzheng/www-joy/venv/lib/python3.6/site-packages/torch/autograd/grad_mode.py\", line 49, in decorate_no_grad\r\n return func(*args, **kwargs)\r\n File \"/home/rachelzheng/www-joy/venv/lib/python3.6/site-packages/transformers/generation_utils.py\", line 480, in generate\r\n model_kwargs=model_kwargs,\r\n File \"/home/rachelzheng/www-joy/venv/lib/python3.6/site-packages/transformers/generation_utils.py\", line 795, in _generate_beam_search\r\n input_ids = torch.cat([input_ids, beam_tokens.unsqueeze(1)], dim=-1)\r\nRuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorMath.cu:196\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[32,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[33,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[34,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[35,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[36,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[37,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[38,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[39,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[40,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[41,0,0] Assertion `index >= -sizes[i] && index 
< sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[42,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[43,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[44,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[45,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[46,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[47,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[48,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[49,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[50,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[51,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[52,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[53,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[54,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[55,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[56,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[57,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of 
bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[58,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[59,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[60,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[61,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[62,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[63,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[64,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[65,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[66,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[67,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[68,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[69,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[70,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[71,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[72,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[73,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[74,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[75,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[76,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[77,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[78,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[79,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[80,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[81,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[82,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[83,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[84,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[85,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[86,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[87,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[88,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[89,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[90,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[91,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[92,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[93,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[94,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[95,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[96,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[97,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[98,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[99,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[100,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[101,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[102,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[103,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[104,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[105,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[106,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[107,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[108,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[109,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[110,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[111,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[112,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[113,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[114,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[115,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[116,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[117,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[118,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[119,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[120,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[121,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[122,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[123,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[124,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[125,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[126,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[127,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[0,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[1,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[2,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[3,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[4,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[5,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[6,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[7,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[8,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[9,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[10,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[11,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[12,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[13,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[14,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[15,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[16,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[17,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[18,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[19,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[20,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[21,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[22,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[23,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[24,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[25,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[26,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[27,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[28,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[29,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[30,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:60: lambda [](int)->auto::operator()(int)->auto: block: [0,0,0], thread:[31,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n```",
"Okay it looks like ```fp16``` leads to the problem. Remove ```--fp16``` solves my problem."
] | 1,593 | 1,601 | 1,595 | NONE | null | I'm trying to fine-tune all the t5 models over CNN/DailyMail to see how they perform compared to the BART ones. I came across your t5-base-cnn model today. I tried using it in the way mentioned but got interrupted by an error that says:
OSError: Model name 'sshleifer/t5-base-cnn' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'sshleifer/t5-base-cnn' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.
Earlier, when I fine-tuned t5-small over CNN, the output directory contained a spiece.model file, but it's not present in your listed files.
Any suggestions on how to get past this so that I can use your t5-base-cnn instead of fine-tuning all over again myself?
Thanks.
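For reference, here is a minimal sketch of the resolution from the comments above, loading the fine-tuned weights from the hub but pairing them with the standard `t5-base` vocabulary. The summarization prefix, generation kwargs, and input text are illustrative:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")  # standard vocab, per the fix above
model = T5ForConditionalGeneration.from_pretrained("sshleifer/t5-base-cnn")

# T5 expects a task prefix; the article string here is a placeholder
input_ids = tokenizer("summarize: " + "Some long news article ...", return_tensors="pt").input_ids
summary_ids = model.generate(input_ids, num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```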
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5484/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5483 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5483/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5483/comments | https://api.github.com/repos/huggingface/transformers/issues/5483/events | https://github.com/huggingface/transformers/issues/5483 | 650,282,132 | MDU6SXNzdWU2NTAyODIxMzI= | 5,483 | can't get models directory after running python run_squad.py | {
"login": "guhuawuli",
"id": 23067203,
"node_id": "MDQ6VXNlcjIzMDY3MjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/23067203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guhuawuli",
"html_url": "https://github.com/guhuawuli",
"followers_url": "https://api.github.com/users/guhuawuli/followers",
"following_url": "https://api.github.com/users/guhuawuli/following{/other_user}",
"gists_url": "https://api.github.com/users/guhuawuli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guhuawuli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guhuawuli/subscriptions",
"organizations_url": "https://api.github.com/users/guhuawuli/orgs",
"repos_url": "https://api.github.com/users/guhuawuli/repos",
"events_url": "https://api.github.com/users/guhuawuli/events{/privacy}",
"received_events_url": "https://api.github.com/users/guhuawuli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | # ๐ Bug
## Information
Model I am using (Bert, XLNet ...):
Bert
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
after running "python run_squad.py ", I didn't get models directory.
running time of my code in colab is only 20 minutes, I think the training process is not done, what's the problem? How to solve that?
The tasks I am working on is:
SQuAD
## To reproduce
Steps to reproduce the behavior:
1. https://qa.fastforwardlabs.com/pytorch/hugging%20face/wikipedia/bert/transformers/2020/05/19/Getting_Started_with_QA.html
2. Click the "Open in Colab" button above
```
!python run_squad.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--output_dir models/bert/ \
--data_dir data/squad \
--overwrite_output_dir \
--overwrite_cache \
--do_train \
--train_file train-v2.0.json \
--version_2_with_negative \
--do_lower_case \
--do_eval \
--predict_file dev-v2.0.json \
--per_gpu_train_batch_size 2 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--threads 10 \
--save_steps 5000
```
## Expected behavior
## Environment info
colab GPU
- `transformers` version:
- Platform:colab
- Python version:3.6.9
- PyTorch version (GPU?):1.5.1+cu101
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5483/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5482 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5482/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5482/comments | https://api.github.com/repos/huggingface/transformers/issues/5482/events | https://github.com/huggingface/transformers/issues/5482 | 650,276,568 | MDU6SXNzdWU2NTAyNzY1Njg= | 5,482 | Can't pickle tokenizers ... | {
"login": "ohmeow",
"id": 14000,
"node_id": "MDQ6VXNlcjE0MDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/14000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohmeow",
"html_url": "https://github.com/ohmeow",
"followers_url": "https://api.github.com/users/ohmeow/followers",
"following_url": "https://api.github.com/users/ohmeow/following{/other_user}",
"gists_url": "https://api.github.com/users/ohmeow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohmeow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohmeow/subscriptions",
"organizations_url": "https://api.github.com/users/ohmeow/orgs",
"repos_url": "https://api.github.com/users/ohmeow/repos",
"events_url": "https://api.github.com/users/ohmeow/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohmeow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi, ok I can reproduce, this is a bug in the `AddedToken`class of `huggingface/tokenizers`. Moving this up."
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | Tokenizers are losing all their special tokens when un-pickled. Not sure if there are other attributes that aren't being rehydrated as well ...
```
hf_tokenizer
# <transformers.tokenization_roberta.RobertaTokenizer at 0x7fc4ce625f10>
hf_tokenizer.special_tokens_map
# {'bos_token': '<s>',
# 'eos_token': '</s>',
# 'unk_token': '<unk>',
# 'sep_token': '</s>',
# 'pad_token': '<pad>',
# 'cls_token': '<s>',
# 'mask_token': '<mask>'}
pickle.dump( hf_tokenizer, open( "save.p", "wb" ) )
hf_tokenizer = pickle.load( open( "save.p", "rb" ) )
hf_tokenizer.special_tokens_map
# {'bos_token': '',
# 'eos_token': '',
# 'unk_token': '',
# 'sep_token': '',
# 'pad_token': '',
# 'cls_token': '',
# 'mask_token': ''}
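
# Workaround sketch (an assumption, not from this thread): round-trip through the
# library's own serialization instead of pickle; save_pretrained writes the special
# tokens map to disk and from_pretrained restores it.
import os
from transformers import RobertaTokenizer

os.makedirs("tok_dir", exist_ok=True)
hf_tokenizer.save_pretrained("tok_dir")
hf_tokenizer = RobertaTokenizer.from_pretrained("tok_dir")
hf_tokenizer.special_tokens_map
# {'bos_token': '<s>', 'eos_token': '</s>', ...}  # special tokens preserved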
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5482/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5481 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5481/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5481/comments | https://api.github.com/repos/huggingface/transformers/issues/5481/events | https://github.com/huggingface/transformers/pull/5481 | 650,266,555 | MDExOlB1bGxSZXF1ZXN0NDQzNzkzNDQx | 5,481 | Merge pull request #1 from huggingface/master | {
"login": "Clement25",
"id": 35480362,
"node_id": "MDQ6VXNlcjM1NDgwMzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/35480362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Clement25",
"html_url": "https://github.com/Clement25",
"followers_url": "https://api.github.com/users/Clement25/followers",
"following_url": "https://api.github.com/users/Clement25/following{/other_user}",
"gists_url": "https://api.github.com/users/Clement25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Clement25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Clement25/subscriptions",
"organizations_url": "https://api.github.com/users/Clement25/orgs",
"repos_url": "https://api.github.com/users/Clement25/repos",
"events_url": "https://api.github.com/users/Clement25/events{/privacy}",
"received_events_url": "https://api.github.com/users/Clement25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | NONE | null | Version track | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5481/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5481",
"html_url": "https://github.com/huggingface/transformers/pull/5481",
"diff_url": "https://github.com/huggingface/transformers/pull/5481.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5481.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5480 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5480/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5480/comments | https://api.github.com/repos/huggingface/transformers/issues/5480/events | https://github.com/huggingface/transformers/issues/5480 | 650,251,847 | MDU6SXNzdWU2NTAyNTE4NDc= | 5,480 | 'Size' Error while loading t5-large model | {
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! This is because the return of `tokenizer('sentence embeddings from t5 model', return_tensors=\"pt\")` is a dict containing values that can be used by the model. It's not a tensor, so it's not `input_ids`.\r\n\r\nChange the following line:\r\n```py\r\ntokenizer('sentence embeddings from t5 model', return_tensors=\"pt\")\r\n```\r\nto\r\n```py\r\ntokenizer('sentence embeddings from t5 model', return_tensors=\"pt\")[\"input_ids\"]\r\n```\r\nto make it work.\r\n\r\nYou can check the documentation [here](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__)."
] | 1,593 | 1,593 | 1,593 | NONE | null | # ๐ Bug
## Information
Model I am using: t5-large
Language I am using the model on: English
The problem arises when using:
```
from transformers import T5Tokenizer, T5Model
tokenizer = T5Tokenizer.from_pretrained('t5-large')
model = T5Model.from_pretrained('t5-large')
input_ids = tokenizer('sentence embeddings from t5 model', return_tensors="pt")
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
```
Error message :
```
KeyError: 'size'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
<ipython-input-2-0ba21adf6547> in <module>
1 input_ids = tokenizer('chyuu wow this is working', return_tensors="pt")
----> 2 outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~/tfproject/tfenv/lib/python3.7/site-packages/transformers/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, decoder_past_key_value_states, use_cache, inputs_embeds, decoder_inputs_embeds, head_mask, output_attentions, output_hidden_states)
949 head_mask=head_mask,
950 output_attentions=output_attentions,
--> 951 output_hidden_states=output_hidden_states,
952 )
953
~/tfproject/tfenv/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~/tfproject/tfenv/lib/python3.7/site-packages/transformers/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_value_states, use_cache, output_attentions, output_hidden_states)
676 raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
677 elif input_ids is not None:
--> 678 input_shape = input_ids.size()
679 input_ids = input_ids.view(-1, input_shape[-1])
680 elif inputs_embeds is not None:
~/tfproject/tfenv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in __getattr__(self, item)
185 return self.data[item]
186 except KeyError:
--> 187 raise AttributeError
188
189 def __getstate__(self):
AttributeError:
```
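Based on the fix suggested in the comment above, here is a minimal corrected sketch. The tokenizer call returns a dict-like `BatchEncoding`, not a tensor, so the ids have to be indexed out explicitly:

```python
from transformers import T5Tokenizer, T5Model

tokenizer = T5Tokenizer.from_pretrained('t5-large')
model = T5Model.from_pretrained('t5-large')

# __call__ returns a BatchEncoding (a dict); take the "input_ids" tensor from it
input_ids = tokenizer('sentence embeddings from t5 model', return_tensors="pt")["input_ids"]
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
```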
The tasks I am working on is:
Getting sentence representation
## To reproduce
## Environment info
- `transformers` version:
- Platform: Ubuntu 18.04
- Python version: python3.7
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?): 1.14
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5480/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5479 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5479/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5479/comments | https://api.github.com/repos/huggingface/transformers/issues/5479/events | https://github.com/huggingface/transformers/pull/5479 | 650,224,604 | MDExOlB1bGxSZXF1ZXN0NDQzNzU5MTEy | 5,479 | Exposing prepare_for_model for both slow & fast tokenizers | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=h1) Report\n> Merging [#5479](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ef0e9d806c51059b07b98cb0279a20d3ba3cbc1d&el=desc) will **increase** coverage by `0.43%`.\n> The diff coverage is `89.53%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5479 +/- ##\n==========================================\n+ Coverage 77.36% 77.80% +0.43% \n==========================================\n Files 141 141 \n Lines 24617 24632 +15 \n==========================================\n+ Hits 19045 19164 +119 \n+ Misses 5572 5468 -104 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <89.28%> (-0.62%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.16% <100.00%> (+0.94%)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.40% <0.00%> (+0.71%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.23% <0.00%> (+2.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+4.10%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `55.79% <0.00%> (+27.58%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5479/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=footer). 
Last update [ef0e9d8...ecdc965](https://codecov.io/gh/huggingface/transformers/pull/5479?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | MEMBER | null | With version v3.0.0, two breaking changes that could have been avoided were introduced. After discussion with @n1t0 and @thomwolf, this PR reverts them by implementing two changes:
- The `prepare_for_model` method for both slow and fast tokenizers is now publicly exposed (before v3.0.0 it was only exposed for the slow tokenizers)
- The truncation methods now default to `longest_first` instead of `only_first`.
This PR adds two tests, for both Python and Rust tokenizers:
- Assert that `tokenizer.prepare_for_model(tokenizer.encode(x)) == tokenizer.encode_plus(x)` (illustrated in the sketch below)
- Assert that the output of `prepare_for_model` for rust and python tokenizers is equal.
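
A hypothetical illustration of the first equivalence (the exact kwargs and the model chosen here are assumptions; the authoritative version is the test added in this PR):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# encode raw ids without special tokens, then let prepare_for_model add them back
ids = tokenizer.encode("hello world", add_special_tokens=False)
assert tokenizer.prepare_for_model(ids) == tokenizer.encode_plus("hello world")
```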
closes https://github.com/huggingface/transformers/issues/5377
closes https://github.com/huggingface/transformers/issues/5447
closes https://github.com/huggingface/transformers/issues/5460 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5479/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5479",
"html_url": "https://github.com/huggingface/transformers/pull/5479",
"diff_url": "https://github.com/huggingface/transformers/pull/5479.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5479.patch",
"merged_at": 1593787881000
} |
https://api.github.com/repos/huggingface/transformers/issues/5478 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5478/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5478/comments | https://api.github.com/repos/huggingface/transformers/issues/5478/events | https://github.com/huggingface/transformers/issues/5478 | 650,223,803 | MDU6SXNzdWU2NTAyMjM4MDM= | 5,478 | Possible breaking undetected change to "data/processors/squad.py" | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, overflowing tokens are handled differently between slow and fast tokenizers with fast tokenizers having better support.\r\nWhich kind of tokenizer are you using?",
"I am using the fast tokenizers ",
"Ok, currently we don't handle overflowing in fast tokenizers with this processing script.\r\nThis is on the short term roadmap though."
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | # โ Questions & Help
It looks like the squad data utils haven't been updated for the new version.
https://github.com/huggingface/transformers/blob/fcf0652460753f8a81f7576e8abdaa6b3742f00e/src/transformers/data/processors/squad.py#L136
```
encoded_dict = tokenizer.encode_plus(  # TODO(thom) update this logic
    truncated_query if tokenizer.padding_side == "right" else span_doc_tokens,
    span_doc_tokens if tokenizer.padding_side == "right" else truncated_query,
    truncation="only_second" if tokenizer.padding_side == "right" else "only_first",
    padding="max_length",
    max_length=max_seq_length,
    return_overflowing_tokens=True,
    stride=max_seq_length - doc_stride - len(truncated_query) - sequence_pair_added_tokens,
    return_token_type_ids=True,
)
```
In the SQuAD utils, `encoded_dict` used to have an 'overflowing_tokens' key, but in the new version all the overflowing tokens are returned as a list of lists. It looks like the data processor doesn't take this into account:
```
if "overflowing_tokens" not in encoded_dict or (
"overflowing_tokens" in encoded_dict and len(encoded_dict["overflowing_tokens"]) == 0
):
break
span_doc_tokens = encoded_dict["overflowing_tokens"]
```
So this logic would conclude that there are no overflowing tokens, since that key no longer exists.
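For reference, here is a minimal sketch of the new behavior (assuming a v3-style fast tokenizer; the checkpoint and texts are placeholders). Instead of an 'overflowing_tokens' entry, each overflow window comes back as its own row of input ids, with 'overflow_to_sample_mapping' pointing back to the source example:
```
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

encoded = tokenizer(
    "a short question?",
    "a very long context " * 200,  # long enough to overflow max_length
    truncation="only_second",
    padding="max_length",
    max_length=128,
    stride=32,
    return_overflowing_tokens=True,
)

# No "overflowing_tokens" key here: input_ids is a list of lists, one
# strided window per overflow chunk, and overflow_to_sample_mapping maps
# each window back to the example it came from (all 0 for a single example).
print(len(encoded["input_ids"]))
print(encoded["overflow_to_sample_mapping"])
```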
The code I am working on uses version 2.11.0, which doesn't have 'overflowing_tokens' either, but it also doesn't return the list of lists like version 3.0.0 does. It has something called 'overflow_to_sample_mapping', which contains only zeros, and I am not sure how to use it.
The 2.11.0 documentation (https://huggingface.co/transformers/v2.11.0/main_classes/tokenizer.html) doesn't mention this return value.
But it looks like the data processor wouldn't produce overflowing tokens or notice an issue under 2.11.0 either, and it doesn't seem to have a method to get strided overflowing tokens. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5478/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5477 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5477/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5477/comments | https://api.github.com/repos/huggingface/transformers/issues/5477/events | https://github.com/huggingface/transformers/pull/5477 | 650,217,259 | MDExOlB1bGxSZXF1ZXN0NDQzNzUzMDMz | 5,477 | Add DeeBERT (entropy-based early exiting for *BERT) | {
"login": "ji-xin",
"id": 20148770,
"node_id": "MDQ6VXNlcjIwMTQ4Nzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/20148770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ji-xin",
"html_url": "https://github.com/ji-xin",
"followers_url": "https://api.github.com/users/ji-xin/followers",
"following_url": "https://api.github.com/users/ji-xin/following{/other_user}",
"gists_url": "https://api.github.com/users/ji-xin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ji-xin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ji-xin/subscriptions",
"organizations_url": "https://api.github.com/users/ji-xin/orgs",
"repos_url": "https://api.github.com/users/ji-xin/repos",
"events_url": "https://api.github.com/users/ji-xin/events{/privacy}",
"received_events_url": "https://api.github.com/users/ji-xin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=h1) Report\n> Merging [#5477](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **increase** coverage by `0.41%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5477 +/- ##\n==========================================\n+ Coverage 77.83% 78.25% +0.41% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n+ Hits 19175 19278 +103 \n+ Misses 5459 5356 -103 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.47% <0.00%> (-49.57%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5477/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=footer). Last update [58cca47...f44de41](https://codecov.io/gh/huggingface/transformers/pull/5477?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Btw: would be awesome so see a token classification example ๐
",
"Hi @JetRunner, thanks for the review! I have updated according to your suggestions.",
"2 checks fail, however they don't seem relevant to my commits.",
"@LysandreJik Thanks for the comments and I've updated accordingly!"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | Add DeeBERT (entropy-based early exiting for *BERT).
Paper: https://www.aclweb.org/anthology/2020.acl-main.204/
Based on its original repository: https://github.com/castorini/DeeBERT | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5477/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5477/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5477",
"html_url": "https://github.com/huggingface/transformers/pull/5477",
"diff_url": "https://github.com/huggingface/transformers/pull/5477.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5477.patch",
"merged_at": 1594167479000
} |
https://api.github.com/repos/huggingface/transformers/issues/5476 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5476/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5476/comments | https://api.github.com/repos/huggingface/transformers/issues/5476/events | https://github.com/huggingface/transformers/issues/5476 | 650,181,077 | MDU6SXNzdWU2NTAxODEwNzc= | 5,476 | Seq2Seq: Option to not store whole dataset in memory | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Do you still need help? I can help out and contribute here. ",
"Yes that would be super helpful! The goal is to avoid using so much CPU memory here:\r\nhttps://github.com/huggingface/transformers/blob/353b8f1e7a7361c0afd9e391381bc226b4a5ca8f/examples/seq2seq/utils.py#L101\r\n\r\nby only reading a few batches from disk at a time instead of all at once. Will probably require saving in a different format.\r\nMaybe something like this:\r\nhttps://github.com/pytorch/fairseq/blob/f0a61a2774aff2efbc1adb0b5daee346a8401605/fairseq/data/data_utils.py#L55\r\n\r\nLet me know if you need more info!",
"Great! \r\nMy idea is to lazily read & encode just the required line numbers from the file when `__getitem__` is called with an index. For this we could create a map of {example number: line numbers} to read. Let me know what you think.",
"Sounds reasonable. What does fairseq do?",
"Your approach sounds good, feel free to get started.\r\n\r\nAnother approach would be not pad inputs when they are getting cached and then make a batch at load time. ",
"I think fairseq had the data in multiple files instead of one big one. Sounds good - I am working on it. Will share when I have a tested version. ",
"Couple of questions : I plan to get rid of the self.source variable; \r\n1. can I get rid of this property?https://github.com/huggingface/transformers/blob/353b8f1e7a7361c0afd9e391381bc226b4a5ca8f/examples/seq2seq/utils.py#L146-L148\r\n\r\n2. Any ideas on how to use the sampler without the full dataset? In general shuffling and sampling may be limited with lazy datasets: although you should be able to use a random sampler in your loader.\r\nhttps://github.com/huggingface/transformers/blob/353b8f1e7a7361c0afd9e391381bc226b4a5ca8f/examples/seq2seq/utils.py#L155",
"@sshleifer I have a working solution that works for the rest of the training loop and passes the tests. See [here](https://github.com/huggingface/transformers/compare/master...Pradhy729:lazy_loading_seq2seq)\r\nJust need input on my points above. Let me know.",
"1) You can get rid of `src_lens` and `tgt_lens`, they are unused afaict\r\n\r\n2) I would suggest trying to store the len of each example, (tokenized or untokenized, but not padded), and passing that to `SortishSampler` instead of `self.source`, and then changing\r\n`def key(self, i): len(self.data[i])` -> `def key(self, i): self.data[i]`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/7e86d070c0bbed949b5c922f914f0fec44af72d4/examples/seq2seq/utils.py#L203.\r\n\r\n",
"Like https://github.com/huggingface/transformers/pull/5818",
"Got it. Good idea!"
] | 1,593 | 1,595 | 1,595 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/blob/ef0e9d806c51059b07b98cb0279a20d3ba3cbc1d/examples/seq2seq/utils.py#L93 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5476/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5475 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5475/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5475/comments | https://api.github.com/repos/huggingface/transformers/issues/5475/events | https://github.com/huggingface/transformers/issues/5475 | 650,174,876 | MDU6SXNzdWU2NTAxNzQ4NzY= | 5,475 | 35 Model Hub entries fail AutoConfig | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"142 AutoTokenizer Failures (the original 35 +107 more).\r\nWhat would help with the `sshleifer` ones (at least) is if I could somehow say \"this is the same as the `BartTokenizer` without uploading the same files all over again. Sadly, S3 does not support symlinks.\r\n\r\n```\r\n{'DeBERTa/base': ('Unrecognized model in DeBERTa/base. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'DeBERTa/large': ('Unrecognized model in DeBERTa/large. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'Huntersx/cola_model': (\"Model name 'Huntersx/cola_model' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'Huntersx/cola_model' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'Itcast/cnc_output': ('Unrecognized model in Itcast/cnc_output. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'JerryQu/v2 distilgpt2': (\"Model name 'JerryQu/v2 distilgpt2' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed 'JerryQu/v2 distilgpt2' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'Narsil/fr_pretrained': ('Unrecognized model in Narsil/fr_pretrained. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'Narsil/pretrained': ('Unrecognized model in Narsil/pretrained. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'Narsil/pretrained2': ('Unrecognized model in Narsil/pretrained2. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'PubChimps/dl-bert': (\"Model name 'PubChimps/dl-bert' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'PubChimps/dl-bert' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'Tereveni-AI/gpt2-124M-uk-fiction': ('expected str, bytes or os.PathLike object, not NoneType',),\r\n 'WikinewsSum/bart-large-multi-combine-wiki-news': ('expected str, bytes or os.PathLike object, not NoneType',),\r\n 'WikinewsSum/bert2bert-multi-de-wiki-news': (\"Unrecognized configuration class <class 'transformers.configuration_encoder_decoder.EncoderDecoderConfig'> to build an AutoTokenizer.\\nModel type should be one of RetriBertConfig, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig.\",),\r\n 'WikinewsSum/bert2bert-multi-en-wiki-news': (\"Unrecognized configuration class <class 'transformers.configuration_encoder_decoder.EncoderDecoderConfig'> to build an AutoTokenizer.\\nModel type should be one of RetriBertConfig, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig.\",),\r\n 'WikinewsSum/bert2bert-multi-fr-wiki-news': (\"Unrecognized configuration class <class 'transformers.configuration_encoder_decoder.EncoderDecoderConfig'> to build an AutoTokenizer.\\nModel type should be one of RetriBertConfig, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig.\",),\r\n 'abryee/TigXLNet': ('`d_head` (64) should be equal to `d_model // n_head` (48)',),\r\n 'adamlin/ClinicalBert_all_notes': ('Unrecognized model in adamlin/ClinicalBert_all_notes. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'adamlin/ClinicalBert_disch': ('Unrecognized model in adamlin/ClinicalBert_disch. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'adamlin/NCBI_BERT_pubmed_mimic_uncased_base_transformers': ('Unrecognized model in adamlin/NCBI_BERT_pubmed_mimic_uncased_base_transformers. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers': ('Unrecognized model in adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'ahotrod/roberta_large_squad2': ('expected str, bytes or os.PathLike object, not NoneType',),\r\n 'aicast/bert_finetuning_test': ('stat: path should be string, bytes, os.PathLike or integer, not NoneType',),\r\n 'airKlizz/bart-large-multi-combine-wiki-news': ('expected str, bytes or os.PathLike object, not NoneType',),\r\n 'airKlizz/bert2bert-multi-de-wiki-news': (\"Unrecognized configuration class <class 'transformers.configuration_encoder_decoder.EncoderDecoderConfig'> to build an AutoTokenizer.\\nModel type should be one of RetriBertConfig, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig.\",),\r\n 'airKlizz/bert2bert-multi-en-wiki-news': (\"Unrecognized configuration class <class 'transformers.configuration_encoder_decoder.EncoderDecoderConfig'> to build an AutoTokenizer.\\nModel type should be one of RetriBertConfig, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig.\",),\r\n 'airKlizz/bert2bert-multi-fr-wiki-news': (\"Unrecognized configuration class <class 'transformers.configuration_encoder_decoder.EncoderDecoderConfig'> to build an AutoTokenizer.\\nModel type should be one of RetriBertConfig, T5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig.\",),\r\n 'allegro/herbert-klej-cased-v1': (\"Model name 'allegro/herbert-klej-cased-v1' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). 
We assumed 'allegro/herbert-klej-cased-v1' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'beyhan/checkpoint-3750': (\"Model name 'beyhan/checkpoint-3750' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'beyhan/checkpoint-3750' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'camembert/camembert-base': (\"Model name 'camembert/camembert-base' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'camembert/camembert-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'castorini/monot5-base-msmarco': (\"Model name 'castorini/monot5-base-msmarco' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'castorini/monot5-base-msmarco' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'chrisliu298/arxiv_ai_gpt2': ('expected str, bytes or os.PathLike object, not NoneType',),\r\n 'clue/albert_chinese_small': (\"Model name 'clue/albert_chinese_small' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'clue/albert_chinese_small' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'clue/albert_chinese_tiny': (\"Model name 'clue/albert_chinese_tiny' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). 
We assumed 'clue/albert_chinese_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'clue/roberta_chinese_3L312_clue_tiny': (\"Model name 'clue/roberta_chinese_3L312_clue_tiny' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_3L312_clue_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'clue/roberta_chinese_3L768_clue_tiny': (\"Model name 'clue/roberta_chinese_3L768_clue_tiny' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_3L768_clue_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'clue/roberta_chinese_base': (\"Model name 'clue/roberta_chinese_base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'clue/roberta_chinese_clue_large': (\"Model name 'clue/roberta_chinese_clue_large' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_clue_large' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'clue/roberta_chinese_clue_tiny': (\"Model name 'clue/roberta_chinese_clue_tiny' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_clue_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'clue/roberta_chinese_large': (\"Model name 'clue/roberta_chinese_large' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_large' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'clue/roberta_chinese_pair_large': (\"Model name 'clue/roberta_chinese_pair_large' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). 
We assumed 'clue/roberta_chinese_pair_large' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'clue/roberta_chinese_pair_tiny': (\"Model name 'clue/roberta_chinese_pair_tiny' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'clue/roberta_chinese_pair_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'codegram/calbert-base-uncased': (\"Model name 'codegram/calbert-base-uncased' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'codegram/calbert-base-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'codegram/calbert-tiny-uncased': (\"Model name 'codegram/calbert-tiny-uncased' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'codegram/calbert-tiny-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'damien-ir/discriminator': (\"Model name 'damien-ir/discriminator' was not found in tokenizers model name list (google/electra-small-generator, google/electra-base-generator, google/electra-large-generator, google/electra-small-discriminator, google/electra-base-discriminator, google/electra-large-discriminator). We assumed 'damien-ir/discriminator' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'dccuchile/cased': ('Unrecognized model in dccuchile/cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'dccuchile/uncased': ('Unrecognized model in dccuchile/uncased. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'denpa92/bert-base-cantonese': (\"Model name 'denpa92/bert-base-cantonese' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'denpa92/bert-base-cantonese' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'djstrong/bg_cs_pl_ru_cased_L-12_H-768_A-12': ('Unrecognized model in djstrong/bg_cs_pl_ru_cased_L-12_H-768_A-12. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'dslim23/bert-base-cased-NER-conll-2003': (\"Model name 'dslim23/bert-base-cased-NER-conll-2003' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'dslim23/bert-base-cased-NER-conll-2003' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'elgeish/cs224n-squad2.0-distilbert-base-uncased': ('stat: path should be string, bytes, os.PathLike or integer, not NoneType',),\r\n 'elgeish/cs224n-squad2.0-roberta-base': ('expected str, bytes or os.PathLike object, not NoneType',),\r\n 'facebook/dpr-ctx_encoder-single-nq-base': ('dpr',),\r\n 'facebook/dpr-question_encoder-single-nq-base': ('dpr',),\r\n 'facebook/dpr-reader-single-nq-base': ('dpr',),\r\n 'gaochangkuan/model_dir': ('expected str, bytes or os.PathLike object, not NoneType',),\r\n 'google/reformer-enwik8': (\"Model name 'google/reformer-enwik8' was not found in tokenizers model name list (google/reformer-crime-and-punishment). We assumed 'google/reformer-enwik8' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'healx/gpt-2-pubmed-large': ('Unrecognized model in healx/gpt-2-pubmed-large. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'healx/gpt-2-pubmed-medium': ('Unrecognized model in healx/gpt-2-pubmed-medium. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'hfl/chinese-roberta-wwm-ext-large': ('expected str, bytes or os.PathLike object, not NoneType',),\r\n 'hfl/chinese-roberta-wwm-ext': ('expected str, bytes or os.PathLike object, not NoneType',),\r\n 'hfl/rbt3': ('Unrecognized model in hfl/rbt3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'hfl/rbtl3': ('Unrecognized model in hfl/rbtl3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'huseinzol05/bert-base-bahasa-cased': ('stat: path should be string, bytes, os.PathLike or integer, not NoneType',),\r\n 'huseinzol05/tiny-bert-bahasa-cased': ('stat: path should be string, bytes, os.PathLike or integer, not NoneType',),\r\n 'lhoestq/distilbert-base-uncased-finetuned-absa-as': (\"Model name 'lhoestq/distilbert-base-uncased-finetuned-absa-as' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-cased, distilbert-base-cased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). We assumed 'lhoestq/distilbert-base-uncased-finetuned-absa-as' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'lonePatient/albert_chinese_small': (\"Model name 'lonePatient/albert_chinese_small' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'lonePatient/albert_chinese_small' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'lonePatient/roberta_chinese_clue_tiny': (\"Model name 'lonePatient/roberta_chinese_clue_tiny' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). 
We assumed 'lonePatient/roberta_chinese_clue_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'm-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alberto': (\"Model name 'm-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alberto' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'm-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alberto' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'microsoft/Multilingual-MiniLM-L12-H384': ('stat: path should be string, bytes, os.PathLike or integer, not NoneType',),\r\n 'microsoft/unilm-base-cased': ('Unrecognized model in microsoft/unilm-base-cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'microsoft/unilm-large-cased': ('Unrecognized model in microsoft/unilm-large-cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'moumeneb1/bert-base-multilingual-cased-ecology_crisis': (\"Model name 'moumeneb1/bert-base-multilingual-cased-ecology_crisis' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'moumeneb1/bert-base-multilingual-cased-ecology_crisis' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'moumeneb1/flaubert-base-cased-ecology_crisis': (\"Model name 'moumeneb1/flaubert-base-cased-ecology_crisis' was not found in tokenizers model name list (flaubert/flaubert_small_cased, flaubert/flaubert_base_uncased, flaubert/flaubert_base_cased, flaubert/flaubert_large_cased). 
We assumed 'moumeneb1/flaubert-base-cased-ecology_crisis' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'mrm8488/bert-uncased-finetuned-qnli': (\"Model name 'mrm8488/bert-uncased-finetuned-qnli' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'mrm8488/bert-uncased-finetuned-qnli' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'mrm8488/prunebert-base-uncased-finepruned-topK-squadv2': ('masked_bert',),\r\n 'mrm8488/prunebert-multi-uncased-finepruned-l0-reg-tydiqa-for-xqa': ('masked_bert',),\r\n 'mrm8488/prunebert-multi-uncased-finepruned-magnitude-tydiqa-for-xqa': ('masked_bert',),\r\n 'mrm8488/prunebert-multi-uncased-finepruned-soft-movement-tydiqa-for-xqa': ('masked_bert',),\r\n 'mrm8488/prunebert-multi-uncased-finepruned-topK-tydiqa-for-xqa': ('masked_bert',),\r\n 'mrm8488/prunebert-multi-uncased-finepruned-tydiqa-for-xqa': ('masked_bert',),\r\n 'mrm8488/roberta-large-finetuned-wsc': ('expected str, bytes or os.PathLike object, not NoneType',),\r\n 'mrm8488/spanbert-base-finetuned-squadv1': (\"Model name 'mrm8488/spanbert-base-finetuned-squadv1' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'mrm8488/spanbert-base-finetuned-squadv1' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'mrm8488/spanbert-base-finetuned-squadv2': (\"Model name 'mrm8488/spanbert-base-finetuned-squadv2' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). 
We assumed 'mrm8488/spanbert-base-finetuned-squadv2' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'mrm8488/spanbert-large-finetuned-squadv1': (\"Model name 'mrm8488/spanbert-large-finetuned-squadv1' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'mrm8488/spanbert-large-finetuned-squadv1' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'mrm8488/spanbert-large-finetuned-squadv2': (\"Model name 'mrm8488/spanbert-large-finetuned-squadv2' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'mrm8488/spanbert-large-finetuned-squadv2' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'oda/music5': ('Unrecognized model in oda/music5. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'patrickvonplaten/reformer-random': (\"Model name 'patrickvonplaten/reformer-random' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). 
We assumed 'patrickvonplaten/reformer-random' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'patrickvonplaten/reformer-tiny-random': (\"Model name 'patrickvonplaten/reformer-tiny-random' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'patrickvonplaten/reformer-tiny-random' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'pertschuk/0_RoBERTa': ('Unrecognized model in pertschuk/0_RoBERTa. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'pertschuk/albert-base-squad-classifier-ms': (\"Model name 'pertschuk/albert-base-squad-classifier-ms' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'pertschuk/albert-base-squad-classifier-ms' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'pertschuk/albert-base-squad-classifier': (\"Model name 'pertschuk/albert-base-squad-classifier' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'pertschuk/albert-base-squad-classifier' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'pertschuk/albert-intent-model-v3': (\"Model name 'pertschuk/albert-intent-model-v3' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'pertschuk/albert-intent-model-v3' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'radha1258/save': ('Unrecognized model in radha1258/save. 
Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'ramsrigouthamg/t5_boolean_questions': (\"Model name 'ramsrigouthamg/t5_boolean_questions' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'ramsrigouthamg/t5_boolean_questions' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'ramsrigouthamg/t5_paraphraser': (\"Model name 'ramsrigouthamg/t5_paraphraser' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'ramsrigouthamg/t5_paraphraser' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'ramsrigouthamg/t5_squad': (\"Model name 'ramsrigouthamg/t5_squad' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'ramsrigouthamg/t5_squad' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'ran/c10': (\"Model name 'ran/c10' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'ran/c10' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'ran/c9': (\"Model name 'ran/c9' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). 
We assumed 'ran/c9' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'ran/h1': (\"Model name 'ran/h1' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'ran/h1' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'ran/y7': (\"Model name 'ran/y7' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'ran/y7' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization': (\"Model name 'remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). 
We assumed 'remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'remi/bertabs-finetuned-extractive-abstractive-summarization': (\"Model name 'remi/bertabs-finetuned-extractive-abstractive-summarization' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'remi/bertabs-finetuned-extractive-abstractive-summarization' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'remi/bertabs-finetuned-xsum-extractive-abstractive-summarization': (\"Model name 'remi/bertabs-finetuned-xsum-extractive-abstractive-summarization' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'remi/bertabs-finetuned-xsum-extractive-abstractive-summarization' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'savasy/checkpoint-1250': (\"Model name 'savasy/checkpoint-1250' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). 
We assumed 'savasy/checkpoint-1250' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'savasy/checkpoint-1875': (\"Model name 'savasy/checkpoint-1875' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'savasy/checkpoint-1875' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'savasy/model': (\"Model name 'savasy/model' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'savasy/model' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'schmidek/electra-small-cased': (\"Model name 'schmidek/electra-small-cased' was not found in tokenizers model name list (google/electra-small-generator, google/electra-base-generator, google/electra-large-generator, google/electra-small-discriminator, google/electra-base-discriminator, google/electra-large-discriminator). We assumed 'schmidek/electra-small-cased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'shauryr/arqmath-roberta-base-1.5M': (\"Model name 'shauryr/arqmath-roberta-base-1.5M' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'shauryr/arqmath-roberta-base-1.5M' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'shauryr/arqmath-roberta-base-2M': (\"Model name 'shauryr/arqmath-roberta-base-2M' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). 
We assumed 'shauryr/arqmath-roberta-base-2M' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'shauryr/arqmath-roberta-base': (\"Model name 'shauryr/arqmath-roberta-base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'shauryr/arqmath-roberta-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'shauryr/checkpoint-475000': (\"Model name 'shauryr/checkpoint-475000' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'shauryr/checkpoint-475000' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'shoarora/alectra-small-owt': (\"Model name 'shoarora/alectra-small-owt' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'shoarora/alectra-small-owt' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'spentaur/yelp': (\"Model name 'spentaur/yelp' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-cased, distilbert-base-cased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). We assumed 'spentaur/yelp' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/blenderbot-3B': ('blenderbot',),\r\n 'sshleifer/cnn_student_d6': (\"Model name 'sshleifer/cnn_student_d6' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/cnn_student_d6' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/mbart-large-cc25': (\"Model name 'sshleifer/mbart-large-cc25' was not found in tokenizers model name list (facebook/mbart-large-en-ro, facebook/mbart-large-cc25). We assumed 'sshleifer/mbart-large-cc25' was a path, a model identifier, or url to a directory containing vocabulary files named ['sentencepiece.bpe.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/mbart-large-en-ro': (\"Model name 'sshleifer/mbart-large-en-ro' was not found in tokenizers model name list (facebook/mbart-large-en-ro, facebook/mbart-large-cc25). 
We assumed 'sshleifer/mbart-large-en-ro' was a path, a model identifier, or url to a directory containing vocabulary files named ['sentencepiece.bpe.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_cnn_12_3': (\"Model name 'sshleifer/student_cnn_12_3' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_cnn_12_3' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_cnn_12_6': (\"Model name 'sshleifer/student_cnn_12_6' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_cnn_12_6' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_cnn_12_9': (\"Model name 'sshleifer/student_cnn_12_9' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_cnn_12_9' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_cnn_6_6': (\"Model name 'sshleifer/student_cnn_6_6' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_cnn_6_6' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_cnn_9_12': (\"Model name 'sshleifer/student_cnn_9_12' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_cnn_9_12' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_cnn_9_9': (\"Model name 'sshleifer/student_cnn_9_9' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_cnn_9_9' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_xsum_12_3': (\"Model name 'sshleifer/student_xsum_12_3' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). 
We assumed 'sshleifer/student_xsum_12_3' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_xsum_12_4': (\"Model name 'sshleifer/student_xsum_12_4' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_12_4' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_xsum_12_6': (\"Model name 'sshleifer/student_xsum_12_6' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_12_6' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_xsum_12_9': (\"Model name 'sshleifer/student_xsum_12_9' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_12_9' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_xsum_3_12': (\"Model name 'sshleifer/student_xsum_3_12' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_3_12' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_xsum_6_12': (\"Model name 'sshleifer/student_xsum_6_12' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_6_12' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_xsum_6_6': (\"Model name 'sshleifer/student_xsum_6_6' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_6_6' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_xsum_9_12': (\"Model name 'sshleifer/student_xsum_9_12' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). 
We assumed 'sshleifer/student_xsum_9_12' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/student_xsum_9_9': (\"Model name 'sshleifer/student_xsum_9_9' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/student_xsum_9_9' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/t5-base-cnn': (\"Model name 'sshleifer/t5-base-cnn' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'sshleifer/t5-base-cnn' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'sshleifer/tinier_bart': (\"Model name 'sshleifer/tinier_bart' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/tinier_bart' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'subbareddyiiit/iiit': ('Unrecognized model in subbareddyiiit/iiit. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'subbareddyiiit/tftelugu': ('Unrecognized model in subbareddyiiit/tftelugu. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),\r\n 'voidful/albert_chinese_base': (\"Model name 'voidful/albert_chinese_base' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_base' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'voidful/albert_chinese_large': (\"Model name 'voidful/albert_chinese_large' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). 
We assumed 'voidful/albert_chinese_large' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'voidful/albert_chinese_small': (\"Model name 'voidful/albert_chinese_small' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_small' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'voidful/albert_chinese_tiny': (\"Model name 'voidful/albert_chinese_tiny' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'voidful/albert_chinese_xlarge': (\"Model name 'voidful/albert_chinese_xlarge' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_xlarge' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'voidful/albert_chinese_xxlarge': (\"Model name 'voidful/albert_chinese_xxlarge' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_xxlarge' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\",),\r\n 'wptoux/albert-chinese-large-qa': ('not a string',)}\r\n```",
"For reference #3359",
"Yes thanks for linking this @patrickvonplaten (I intended to look for this as well)\r\n\r\nThe model pages for those models should already display a (more or less) descriptive message (e.g. https://huggingface.co/djstrong/bg_cs_pl_ru_cased_L-12_H-768_A-12) so I believe we can close this.",
"the problem of denpa92/bert-base-cantonese is not solved.",
"Is there some way for us to, like, change the config file and make a pull request? I'm not 100% sure how to find the Adam Lin that added ClinicalBert_all_notes and ask him to change it himself...",
"> Is there some way for us to, like, change the config file and make a pull request? I'm not 100% sure how to find the Adam Lin that added ClinicalBert_all_notes and ask him to change it himself...\r\n\r\nI think we would like to enable pull requests on model repositories (cc @julien-c)",
"Great to hear, @patrickvonplaten. And sorry for the naive question, but where would I find these repos? I've tried searching around a bit for ClinicalBert_all_notes and I've yet to find it on GitHub...",
"@drussellmrichie, on the model hub :) https://huggingface.co/models"
] | 1,593 | 1,629 | 1,593 | CONTRIBUTOR | null | Here is what I ran:
```python
from transformers.hf_api import HfApi
from tqdm import tqdm
import pandas as pd
model_list = HfApi().model_list()
model_ids = [x.modelId for x in model_list]
from transformers import AutoConfig
def check_hub(cls, model_ids):
    results = {}
    failure_data = {}
    for m in tqdm(model_ids):
        try:
            cls.from_pretrained(m)
            results[m] = True
        except Exception as e:
            failure_data[m] = e.args
            results[m] = False
    return results, failure_data
results, failure_data = check_hub(AutoConfig, model_ids)
print(failure_data)
```
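As an aside, `pandas` is imported above but never used; presumably it was intended for summarizing the results. A minimal sketch of that step (my assumption, not part of the original run; it reuses the `results` dict built above):

```python
# Count how many configs loaded successfully vs. failed.
summary = pd.Series(results, name="loads_ok")
print(summary.value_counts())
```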
Results:
```python
{'DeBERTa/base': ('Unrecognized model in DeBERTa/base. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'DeBERTa/large': ('Unrecognized model in DeBERTa/large. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'Itcast/cnc_output': ('Unrecognized model in Itcast/cnc_output. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'Narsil/fr_pretrained': ('Unrecognized model in Narsil/fr_pretrained. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'Narsil/pretrained': ('Unrecognized model in Narsil/pretrained. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'Narsil/pretrained2': ('Unrecognized model in Narsil/pretrained2. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'abryee/TigXLNet': ('`d_head` (64) should be equal to `d_model // n_head` (48)',),
'adamlin/ClinicalBert_all_notes': ('Unrecognized model in adamlin/ClinicalBert_all_notes. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'adamlin/ClinicalBert_disch': ('Unrecognized model in adamlin/ClinicalBert_disch. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'adamlin/NCBI_BERT_pubmed_mimic_uncased_base_transformers': ('Unrecognized model in adamlin/NCBI_BERT_pubmed_mimic_uncased_base_transformers. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers': ('Unrecognized model in adamlin/NCBI_BERT_pubmed_mimic_uncased_large_transformers. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'dccuchile/cased': ('Unrecognized model in dccuchile/cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'dccuchile/uncased': ('Unrecognized model in dccuchile/uncased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'djstrong/bg_cs_pl_ru_cased_L-12_H-768_A-12': ('Unrecognized model in djstrong/bg_cs_pl_ru_cased_L-12_H-768_A-12. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'facebook/dpr-ctx_encoder-single-nq-base': ('dpr',),
'facebook/dpr-question_encoder-single-nq-base': ('dpr',),
'facebook/dpr-reader-single-nq-base': ('dpr',),
'healx/gpt-2-pubmed-large': ('Unrecognized model in healx/gpt-2-pubmed-large. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'healx/gpt-2-pubmed-medium': ('Unrecognized model in healx/gpt-2-pubmed-medium. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'hfl/rbt3': ('Unrecognized model in hfl/rbt3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'hfl/rbtl3': ('Unrecognized model in hfl/rbtl3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'microsoft/unilm-base-cased': ('Unrecognized model in microsoft/unilm-base-cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'microsoft/unilm-large-cased': ('Unrecognized model in microsoft/unilm-large-cased. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'mrm8488/prunebert-base-uncased-finepruned-topK-squadv2': ('masked_bert',),
'mrm8488/prunebert-multi-uncased-finepruned-l0-reg-tydiqa-for-xqa': ('masked_bert',),
'mrm8488/prunebert-multi-uncased-finepruned-magnitude-tydiqa-for-xqa': ('masked_bert',),
'mrm8488/prunebert-multi-uncased-finepruned-soft-movement-tydiqa-for-xqa': ('masked_bert',),
'mrm8488/prunebert-multi-uncased-finepruned-topK-tydiqa-for-xqa': ('masked_bert',),
'mrm8488/prunebert-multi-uncased-finepruned-tydiqa-for-xqa': ('masked_bert',),
'oda/music5': ('Unrecognized model in oda/music5. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'pertschuk/0_RoBERTa': ('Unrecognized model in pertschuk/0_RoBERTa. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'radha1258/save': ('Unrecognized model in radha1258/save. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'sshleifer/blenderbot-3B': ('blenderbot',),
'subbareddyiiit/iiit': ('Unrecognized model in subbareddyiiit/iiit. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',),
'subbareddyiiit/tftelugu': ('Unrecognized model in subbareddyiiit/tftelugu. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder',)}
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5475/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5475/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5474 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5474/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5474/comments | https://api.github.com/repos/huggingface/transformers/issues/5474/events | https://github.com/huggingface/transformers/issues/5474 | 650,172,589 | MDU6SXNzdWU2NTAxNzI1ODk= | 5,474 | Can't use AutoModelForCausalLM with bert | {
"login": "sshearing",
"id": 19912805,
"node_id": "MDQ6VXNlcjE5OTEyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/19912805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshearing",
"html_url": "https://github.com/sshearing",
"followers_url": "https://api.github.com/users/sshearing/followers",
"following_url": "https://api.github.com/users/sshearing/following{/other_user}",
"gists_url": "https://api.github.com/users/sshearing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshearing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshearing/subscriptions",
"organizations_url": "https://api.github.com/users/sshearing/orgs",
"repos_url": "https://api.github.com/users/sshearing/repos",
"events_url": "https://api.github.com/users/sshearing/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshearing/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Can reproduce :-) Opened a PR to fix it - thanks for the issue @sshearing !"
] | 1,593 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] my own modified scripts: (give details below)
Here are three simple lines of code you can run to reproduce the bug:
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('bert-base-uncased')
model = AutoModelForCausalLM.from_pretrained('bert-base-uncased', is_decoder=True)
```
The task I am working on is:
XSUM / CNNDM summarization
## To reproduce
Steps to reproduce the behavior:
1. Run the first two lines of code from the script section above.
2. Run the first and third lines of code from the script section above.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
If you run the second line of code, you get:
```
AssertionError: If you want to use `BertLMHeadModel` as a standalone, add `is_decoder=True`.
```
If you run the third line of code (add `is_decoder=True`), you get:
```
TypeError: __init__() got an unexpected keyword argument 'is_decoder'
```
The first error occurs because a default bert-base-uncased config is created, which does not set `is_decoder` to `True`. This is reasonable behavior.
The second error occurs because when you pass in `is_decoder=True`, it is correctly added to the config but is also, incorrectly, passed on to the model `__init__`. `BertLMHeadModel`'s init only takes a config and does not accept any kwargs, so we crash. I don't think this is intended behavior: it's reasonable to expect that `is_decoder` can be passed through `AutoModelForCausalLM` to the config it creates without crashing.
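For reference, one possible workaround (a minimal sketch I have not verified against this exact version; it assumes `from_pretrained` accepts an explicit `config` argument):

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Set the decoder flag on the config first, so that no unexpected
# keyword argument ever reaches BertLMHeadModel.__init__.
config = AutoConfig.from_pretrained('bert-base-uncased')
config.is_decoder = True
model = AutoModelForCausalLM.from_pretrained('bert-base-uncased', config=config)
```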
## Expected behavior
I expect that running `AutoModelForCausalLM.from_pretrained('bert-base-uncased')` will return a `BertLMHeadModel` with the `is_decoder` flag set to `True` in its config. Alternatively, I expect `AutoModelForCausalLM.from_pretrained('bert-base-uncased', is_decoder=True)` to produce the same result.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.0
- Platform: Linux-3.10.0-862.14.4.el7.x86_64-x86_64-with-centos-7.5.1804-Core
- Python version: 3.7.3
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: tried with both
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5474/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5473 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5473/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5473/comments | https://api.github.com/repos/huggingface/transformers/issues/5473/events | https://github.com/huggingface/transformers/issues/5473 | 650,149,142 | MDU6SXNzdWU2NTAxNDkxNDI= | 5,473 | TFAutoModelForSequenceClassification: ValueError: Layer #1 (named "classifier") expects 2 weight(s), but the saved weights have 4 element(s). | {
"login": "TheophileBlard",
"id": 37028092,
"node_id": "MDQ6VXNlcjM3MDI4MDky",
"avatar_url": "https://avatars.githubusercontent.com/u/37028092?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheophileBlard",
"html_url": "https://github.com/TheophileBlard",
"followers_url": "https://api.github.com/users/TheophileBlard/followers",
"following_url": "https://api.github.com/users/TheophileBlard/following{/other_user}",
"gists_url": "https://api.github.com/users/TheophileBlard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheophileBlard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheophileBlard/subscriptions",
"organizations_url": "https://api.github.com/users/TheophileBlard/orgs",
"repos_url": "https://api.github.com/users/TheophileBlard/repos",
"events_url": "https://api.github.com/users/TheophileBlard/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheophileBlard/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Thanks for opening this issue. This should have been fixed by https://github.com/huggingface/transformers/pull/5414."
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | # 🐛 Bug
## Information
`TFAutoModelForSequenceClassification` does not work on v3.0.0 / can't load a model that was working on v2.11.0
## To reproduce
Steps to reproduce the behavior:
- This works:
```python
!pip install transformers==2.11.0
from transformers import TFAutoModelForSequenceClassification
model = TFAutoModelForSequenceClassification.from_pretrained("tblard/tf-allocine")
```
but not this:
```python
!pip install "transformers>=3.0.0"  # quoted so the shell does not treat ">" as a redirect
from transformers import TFAutoModelForSequenceClassification
model = TFAutoModelForSequenceClassification.from_pretrained("tblard/tf-allocine")
```
as it outputs:
```shell
ValueError: Layer #1 (named "classifier") expects 2 weight(s), but the saved weights have 4 element(s).
```
- Using `TFCamembertForSequenceClassification` instead of `TFAutoModelForSequenceClassification` also works.
- I couldn't find any other model using `TFAutoModelForSequenceClassification` on the model zoo to verify the issue does not come from the model itself.
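As a side note, here is a minimal sketch of the working path mentioned in the first bullet above (same checkpoint, concrete class instead of the auto class):

```python
from transformers import TFCamembertForSequenceClassification

# Loading through the concrete class works, per the bullet above.
model = TFCamembertForSequenceClassification.from_pretrained("tblard/tf-allocine")
```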
## Expected behavior
No errors.
## Environment info
Standard Google Colab environment.
- `transformers` version: 3.0.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5473/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5472 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5472/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5472/comments | https://api.github.com/repos/huggingface/transformers/issues/5472/events | https://github.com/huggingface/transformers/pull/5472 | 650,108,665 | MDExOlB1bGxSZXF1ZXN0NDQzNjYzNzkw | 5,472 | Truncation in GLUE should be longest first | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=h1) Report\n> Merging [#5472](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/306f1a269504b781f886d75105acabf8ae95bd11&el=desc) will **decrease** coverage by `1.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5472 +/- ##\n==========================================\n- Coverage 77.86% 76.77% -1.09% \n==========================================\n Files 141 141 \n Lines 24608 24608 \n==========================================\n- Hits 19160 18892 -268 \n- Misses 5448 5716 +268 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <รธ> (รธ)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (รธ)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.37% <0.00%> (+25.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5472/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=footer). Last update [306f1a2...5f25ea3](https://codecov.io/gh/huggingface/transformers/pull/5472?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,594 | 1,594 | MEMBER | null | The GLUE example currently crashes with the QQP task because of the truncation.
It outputs the following warnings:
```
ERROR:transformers.tokenization_utils:We need to remove 186 to truncate the inputbut the first sequence has a length 25. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'.
ERROR:transformers.tokenization_utils:We need to remove 49 to truncate the inputbut the first sequence has a length 34. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'.
ERROR:transformers.tokenization_utils:We need to remove 203 to truncate the inputbut the first sequence has a length 42. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'.
ERROR:transformers.tokenization_utils:We need to remove 39 to truncate the inputbut the first sequence has a length 28. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'.
ERROR:transformers.tokenization_utils:We need to remove 23 to truncate the inputbut the first sequence has a length 20. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'.
ERROR:transformers.tokenization_utils:We need to remove 91 to truncate the inputbut the first sequence has a length 63. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'.
```
before crashing with the following:
```
ValueError: expected sequence of length 128 at dim 1 (got 202)
```
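For context, the fix amounts to switching the GLUE encoding to the `longest_first` truncation strategy. A rough sketch of what that call looks like (not the actual diff; `question` and `answer` are placeholder strings, and the argument names follow the v3 tokenizer API):

```python
enc = tokenizer(
    question,
    answer,
    max_length=128,
    padding="max_length",
    truncation="longest_first",  # instead of the failing "only_first" strategy
)
```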
closes https://github.com/huggingface/transformers/issues/5460 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5472/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5472",
"html_url": "https://github.com/huggingface/transformers/pull/5472",
"diff_url": "https://github.com/huggingface/transformers/pull/5472.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5472.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5471 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5471/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5471/comments | https://api.github.com/repos/huggingface/transformers/issues/5471/events | https://github.com/huggingface/transformers/pull/5471 | 650,105,871 | MDExOlB1bGxSZXF1ZXN0NDQzNjYxNTA0 | 5,471 | Update: ElectraDiscriminatorPredictions forward. | {
"login": "shenfe",
"id": 22103866,
"node_id": "MDQ6VXNlcjIyMTAzODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/22103866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shenfe",
"html_url": "https://github.com/shenfe",
"followers_url": "https://api.github.com/users/shenfe/followers",
"following_url": "https://api.github.com/users/shenfe/following{/other_user}",
"gists_url": "https://api.github.com/users/shenfe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shenfe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shenfe/subscriptions",
"organizations_url": "https://api.github.com/users/shenfe/orgs",
"repos_url": "https://api.github.com/users/shenfe/repos",
"events_url": "https://api.github.com/users/shenfe/events{/privacy}",
"received_events_url": "https://api.github.com/users/shenfe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=h1) Report\n> Merging [#5471](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/13a8588f2d70fe78dc36d84829c04fa9d39572d1&el=desc) will **increase** coverage by `1.14%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5471 +/- ##\n==========================================\n+ Coverage 76.77% 77.92% +1.14% \n==========================================\n Files 141 141 \n Lines 24617 24617 \n==========================================\n+ Hits 18900 19183 +283 \n+ Misses 5717 5434 -283 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `80.62% <100.00%> (รธ)` | |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.47% <0.00%> (-49.57%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (รธ)` | |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+1.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5471/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=footer). Last update [13a8588...42044b4](https://codecov.io/gh/huggingface/transformers/pull/5471?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | `ElectraDiscriminatorPredictions.forward` should not need `attention_mask`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5471/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5471",
"html_url": "https://github.com/huggingface/transformers/pull/5471",
"diff_url": "https://github.com/huggingface/transformers/pull/5471.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5471.patch",
"merged_at": 1593712653000
} |
https://api.github.com/repos/huggingface/transformers/issues/5470 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5470/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5470/comments | https://api.github.com/repos/huggingface/transformers/issues/5470/events | https://github.com/huggingface/transformers/issues/5470 | 650,072,353 | MDU6SXNzdWU2NTAwNzIzNTM= | 5,470 | Unable to use run_squad with xla_spawn.py on TPU | {
"login": "dhruvluci",
"id": 39732134,
"node_id": "MDQ6VXNlcjM5NzMyMTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39732134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhruvluci",
"html_url": "https://github.com/dhruvluci",
"followers_url": "https://api.github.com/users/dhruvluci/followers",
"following_url": "https://api.github.com/users/dhruvluci/following{/other_user}",
"gists_url": "https://api.github.com/users/dhruvluci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhruvluci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhruvluci/subscriptions",
"organizations_url": "https://api.github.com/users/dhruvluci/orgs",
"repos_url": "https://api.github.com/users/dhruvluci/repos",
"events_url": "https://api.github.com/users/dhruvluci/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhruvluci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! The SQuAD example doesn't have trainer support yet. We're in the process of adding it. You can see the supported tasks [here](https://github.com/huggingface/transformers/tree/master/examples#the-big-table-of-tasks), only the tasks with Trainer, TFTrainer or pytorch-lightning support can run on TPU."
] | 1,593 | 1,593 | 1,593 | NONE | null | # ๐ Bug
## Information
Model I am using (Bert, XLNet ...): Electra
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: run_squad.py + xla_spawn.py
The task I am working on is:
* [ ] an official GLUE/SQuAD task: (give the name): the official SQuAD task
## To reproduce
Steps to reproduce the behavior:
1. Install PyTorch/XLA on Colab using:
```
VERSION = "20200325" #@param ["1.5" , "20200325", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
```
2. Try to run run_squad.py on Colab TPUs using xla_spawn.py:
```
python examples/xla_spawn.py --num_cores 8 \
examples/question-answering/run_squad.py \
--model_type electra \
--model_name_or_path google/electra-base-discriminator \
--do_train \
--do_eval \
--do_lower_case \
--train_file "/content/drive/My Drive/bert/train.json" \
--predict_file "/content/drive/My Drive/bert/val.json" \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir "/content/drive/My Drive/bert/newdir6"
```
3. The following error is thrown:
```
Traceback (most recent call last):
File "examples/xla_spawn.py", line 72, in <module>
main()
File "examples/xla_spawn.py", line 68, in main
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
AttributeError: module 'run_squad' has no attribute '_mp_fn'
```
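A minimal workaround sketch: `xla_spawn.py` looks up a `_mp_fn` attribute on the target module, so appending a shim like the following to `run_squad.py` should give `xmp.spawn` an entry point (hypothetical and untested here; `main()` is the script's existing entry point):
```python
# Hypothetical shim for examples/question-answering/run_squad.py:
# xla_spawn.py calls xmp.spawn(mod._mp_fn, ...), so the module must expose it.
def _mp_fn(index):
    # index is the per-process rank assigned by torch_xla's xmp.spawn
    main()
```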
## Expected behavior
Training should run properly using xla_spawn.py, which it does for GLUE tasks using:
```
python examples/xla_spawn.py --num_cores 8 \
examples/text-classification/run_glue.py
```
## Environment info
- `transformers` version: 2nd July 2020 clone.
- Platform: Colab
- Python version: 3.8
- PyTorch version (GPU?): 20200325 (pytorch-xla)
- Tensorflow version (GPU?):N
- Using GPU in script?:N
- Using distributed or parallel set-up in script?:N
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5470/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5469 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5469/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5469/comments | https://api.github.com/repos/huggingface/transformers/issues/5469/events | https://github.com/huggingface/transformers/pull/5469 | 650,059,684 | MDExOlB1bGxSZXF1ZXN0NDQzNjIzMTc1 | 5,469 | [Discussion] fix zero division error (Reformer batch size bug) | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | This PR is for discussion
While training the Reformer model, I noticed that a zero division error often occurs when the batch size is increased.
With these changes, the error no longer occurs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5469/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5469",
"html_url": "https://github.com/huggingface/transformers/pull/5469",
"diff_url": "https://github.com/huggingface/transformers/pull/5469.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5469.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5468 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5468/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5468/comments | https://api.github.com/repos/huggingface/transformers/issues/5468/events | https://github.com/huggingface/transformers/pull/5468 | 650,006,429 | MDExOlB1bGxSZXF1ZXN0NDQzNTc5MjYy | 5,468 | Fix saved model creation | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=h1) Report\n> Merging [#5468](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5a0dac53bfd6e69ae64fb3119d607445e1a308d8&el=desc) will **increase** coverage by `0.34%`.\n> The diff coverage is `93.30%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5468 +/- ##\n==========================================\n+ Coverage 79.33% 79.67% +0.34% \n==========================================\n Files 146 146 \n Lines 26611 26582 -29 \n==========================================\n+ Hits 21111 21180 +69 \n+ Misses 5500 5402 -98 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.50% <0.00%> (-0.60%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <16.66%> (-63.98%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <71.42%> (-1.85%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.87% <88.23%> (+0.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `76.73% <89.28%> (+0.26%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `91.34% <95.83%> (-0.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.75% <96.15%> (-0.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.93% <100.00%> (-0.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.84% <100.00%> (-0.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.78% <100.00%> (+34.60%)` | :arrow_up: |\n| ... 
and [20 more](https://codecov.io/gh/huggingface/transformers/pull/5468/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=footer). Last update [5a0dac5...69ea0de](https://codecov.io/gh/huggingface/transformers/pull/5468?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Sure! Will try to find a proper test for that.",
"Awesome! Fearing many merge conflicts with https://github.com/huggingface/transformers/pull/5395#pullrequestreview-442374095 :D ",
"I fear the same! We should wait to merge #5395 before to merge this one.",
"CirlceCi gives the following message error `Too long with no output (exceeded 10m0s): context deadline exceeded` does it means that the tests takes too long now?",
"oof yeah, that's what it means! Do you have any idea of how long the added tests take in your local environment, compared to the full test suite?",
"I get around 1.5 min per new test / per model => 3min per model => ~33min but this is on my laptop which is really cheap",
"That's a slow test :) We can mark them as slow for now (using the `@slow` decorator) and monitor how long they take. If they take too long, we'll have to think of a different way to test those.",
"Ok good to me",
"Still have to do some bugfix and once all the models pass the tests I will put the `@slow` decorator.",
"Ok, now all the models can be saved in TF saved model format. I put some tests to be sure of that, but they have the `@slow` decorator.\r\n\r\nThis is good to merge to me. Nevertheless, I have done several changes in the input of several layers, @LysandreJik can you check if you are ok with that?\r\n\r\nBasically, saved models cannot be run with `list / tuple` inputs, because this is very Python specific and cannot be translated into gRPC.",
"@jplu Thanks for your work first! The transformers-based model now can be served by Tensorflow Serving. But I still have one question about max_seg_length. \r\nIn order to make inference faster, is it possible to set max_length to be None? In tf 1.x, I can use the following codes to make serving accept dynamic max_seq_length.\r\n```python\r\nestimator = ...\r\n\r\ndef serving_input_receiver_fn(max_seq_len):\r\n input_ids = tf.compat.v1.placeholder(shape=[None, max_seq_len], dtype=tf.int32, name='input_ids')\r\n input_mask = tf.compat.v1.placeholder(shape=[None, max_seq_len], dtype=tf.int32, name='input_mask')\r\n segment_ids = tf.compat.v1.placeholder(shape=[None, max_seq_len], dtype=tf.int32, name='segment_ids')\r\n features = {'input_ids': input_ids, 'input_mask': input_mask, 'segment_ids': segment_ids}\r\n return tf.estimator.export.build_raw_serving_input_receiver_fn(features)\r\n\r\nestimator.export_savedmodel(model_output_dir, serving_input_receiver_fn(None))\r\n```\r\nI found following codes work in TF2.x. Just ignore this message.\r\n```python\r\ninput_feature = {\r\n 'input_ids': tf.TensorSpec(shape=(None, None), dtype=tf.int32, name='input_ids'),\r\n 'token_type_ids': tf.TensorSpec(shape=(None, None), dtype=tf.int32, name='token_type_ids'),\r\n 'attention_mask': tf.TensorSpec(shape=(None, None), dtype=tf.int32, name='attention_mask')\r\n}\r\nmodel._set_save_spec(input_feature)\r\n```\r\n",
"Hello! Thanks for you suggestion. The saved model creation is postponed, and will be for a next PR. This one is here for bugfix only.\n\nYour code will work fine, but not for all the models and tasks. For example token classification uses a different shape, and DistilBert doesn't have a token_type_ids. Unfortunately, it is a bit more complicated than just putting this piece of code somewhere, it has to be task and model independant.",
"@jplu What are the issues in deleting `cast_bool_to_primitive` altogether?",
"T5 will not work anymore because the number of output depends on the use_cache parameter. And for now we still want to keep a variable length output.\r\n\r\nWe are currently reworking the outputs approach of all the models the output dictionaries instead of tuples. I will come back on this issue of boolean tensor once this new type of output will be available.",
"Indeed, I think this is fine, mostly because I expect people to hack around the hidden layers mostly for the PyTorch implementations.\r\n\r\nGood for me!",
"This is a valuable point indeed @LysandreJik! Nevertheless, unpacking data is not compliant with TensorFlow Autograph as far as I know, basically we loose this usage as it was before.",
"Alright, sounds good! Could you resolve the merge conflict and then we merge?",
"Fixed!"
] | 1,593 | 1,599 | 1,596 | CONTRIBUTOR | null | Fixes a bug where the parameters `output_hidden_states` and `output_attentions` were ignored when creating a saved model.
Reproducibility with TF 2.2:
```python
import tensorflow as tf
from transformers import TFBertModel, BertTokenizer, BertConfig
config = BertConfig.from_pretrained("bert-base-multilingual-uncased", output_hidden_states=True)
model = TFBertModel.from_pretrained('bert-base-multilingual-uncased', config=config)
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-uncased", use_fast=False)
features = tokenizer.encode_plus("Hello world.", add_special_tokens=True, max_length=48, pad_to_max_length=True, return_tensors="tf", truncation=True)
model._saved_model_inputs_spec = None
model._set_save_spec(dict(features))
tf.saved_model.save(model, "save/model")
```
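For comparison, the missing output is present when the model is called eagerly before export; a minimal check, assuming the objects defined above and the tuple outputs of transformers 3.x:
```python
# Eager call: with output_hidden_states=True this returns 3 items,
# (last_hidden_state, pooler_output, hidden_states).
outputs = model(dict(features))
print(len(outputs))  # 3 here, while the exported signature exposes only 2
```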
Then run the serving CLI with:
```
saved_model_cli show --dir save/model/ --tag_set serve --signature_def serving_default
```
There will be only 2 outputs instead of 3. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5468/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5468",
"html_url": "https://github.com/huggingface/transformers/pull/5468",
"diff_url": "https://github.com/huggingface/transformers/pull/5468.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5468.patch",
"merged_at": 1596456640000
} |
https://api.github.com/repos/huggingface/transformers/issues/5467 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5467/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5467/comments | https://api.github.com/repos/huggingface/transformers/issues/5467/events | https://github.com/huggingface/transformers/pull/5467 | 649,970,650 | MDExOlB1bGxSZXF1ZXN0NDQzNTQ5OTA5 | 5,467 | Tokenizer summary | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=h1) Report\n> Merging [#5467](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/35befd9ce31c23a774fd34f57bc44033ce70141d&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5467 +/- ##\n==========================================\n- Coverage 77.57% 77.48% -0.09% \n==========================================\n Files 141 141 \n Lines 24581 24581 \n==========================================\n- Hits 19068 19046 -22 \n- Misses 5513 5535 +22 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.42% <0.00%> (+1.50%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=footer). Last update [35befd9...80529fa](https://codecov.io/gh/huggingface/transformers/pull/5467?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | COLLABORATOR | null | This PR introduces a mid/high-level summary of the different tokenizer types used in the library (a bit like the model summary).
Preview is [here](https://56179-155220641-gh.circle-artifacts.com/0/docs/_build/html/tokenizer_summary.html). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5467/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5467",
"html_url": "https://github.com/huggingface/transformers/pull/5467",
"diff_url": "https://github.com/huggingface/transformers/pull/5467.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5467.patch",
"merged_at": 1593724063000
} |
https://api.github.com/repos/huggingface/transformers/issues/5466 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5466/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5466/comments | https://api.github.com/repos/huggingface/transformers/issues/5466/events | https://github.com/huggingface/transformers/pull/5466 | 649,924,506 | MDExOlB1bGxSZXF1ZXN0NDQzNTExNDg1 | 5,466 | Fix typo in glossary | {
"login": "eigenfoo",
"id": 19851673,
"node_id": "MDQ6VXNlcjE5ODUxNjcz",
"avatar_url": "https://avatars.githubusercontent.com/u/19851673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eigenfoo",
"html_url": "https://github.com/eigenfoo",
"followers_url": "https://api.github.com/users/eigenfoo/followers",
"following_url": "https://api.github.com/users/eigenfoo/following{/other_user}",
"gists_url": "https://api.github.com/users/eigenfoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eigenfoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eigenfoo/subscriptions",
"organizations_url": "https://api.github.com/users/eigenfoo/orgs",
"repos_url": "https://api.github.com/users/eigenfoo/repos",
"events_url": "https://api.github.com/users/eigenfoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/eigenfoo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5466/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5466",
"html_url": "https://github.com/huggingface/transformers/pull/5466",
"diff_url": "https://github.com/huggingface/transformers/pull/5466.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5466.patch",
"merged_at": 1593695974000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5465 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5465/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5465/comments | https://api.github.com/repos/huggingface/transformers/issues/5465/events | https://github.com/huggingface/transformers/pull/5465 | 649,814,338 | MDExOlB1bGxSZXF1ZXN0NDQzNDIwMzYw | 5,465 | Fixing missing arguments for TransfoXL tokenizer when using TextGenerationPipeline | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=h1) Report\n> Merging [#5465](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6726416e4a9780e7a92b5681e1446f15f7ef83d3&el=desc) will **decrease** coverage by `0.12%`.\n> The diff coverage is `85.71%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5465 +/- ##\n==========================================\n- Coverage 77.52% 77.40% -0.13% \n==========================================\n Files 141 141 \n Lines 24610 24617 +7 \n==========================================\n- Hits 19079 19054 -25 \n- Misses 5531 5563 +32 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.00% <85.71%> (+0.11%)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.40% <0.00%> (+0.71%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5465/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+33.33%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=footer). Last update [6726416...25f8c86](https://codecov.io/gh/huggingface/transformers/pull/5465?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | As discussed with @LysandreJik and @mfuntowicz, `TextGenerationPipeline` gives imperfect results when using TransfoXL, because the tokenizer call lacks the `add_space_before_punct_symbol` argument. To fix this, this PR overrides `_parse_and_tokenize` for this pipeline so that tokenizer arguments can be passed through. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5465/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5465",
"html_url": "https://github.com/huggingface/transformers/pull/5465",
"diff_url": "https://github.com/huggingface/transformers/pull/5465.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5465.patch",
"merged_at": 1593690813000
} |
https://api.github.com/repos/huggingface/transformers/issues/5464 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5464/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5464/comments | https://api.github.com/repos/huggingface/transformers/issues/5464/events | https://github.com/huggingface/transformers/pull/5464 | 649,811,219 | MDExOlB1bGxSZXF1ZXN0NDQzNDE3ODE0 | 5,464 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Create model card for electra-small-discriminator fine-tuned on SQUAD v2.0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5464/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5464",
"html_url": "https://github.com/huggingface/transformers/pull/5464",
"diff_url": "https://github.com/huggingface/transformers/pull/5464.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5464.patch",
"merged_at": 1593771590000
} |
https://api.github.com/repos/huggingface/transformers/issues/5463 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5463/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5463/comments | https://api.github.com/repos/huggingface/transformers/issues/5463/events | https://github.com/huggingface/transformers/issues/5463 | 649,788,247 | MDU6SXNzdWU2NDk3ODgyNDc= | 5,463 | Pre-Trained Model (ipuneetrathore/bert-base-cased-finetuned-finBERT) loads in PyTorch but not Tensorflow | {
"login": "turmeric-blend",
"id": 62788745,
"node_id": "MDQ6VXNlcjYyNzg4NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/62788745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/turmeric-blend",
"html_url": "https://github.com/turmeric-blend",
"followers_url": "https://api.github.com/users/turmeric-blend/followers",
"following_url": "https://api.github.com/users/turmeric-blend/following{/other_user}",
"gists_url": "https://api.github.com/users/turmeric-blend/gists{/gist_id}",
"starred_url": "https://api.github.com/users/turmeric-blend/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/turmeric-blend/subscriptions",
"organizations_url": "https://api.github.com/users/turmeric-blend/orgs",
"repos_url": "https://api.github.com/users/turmeric-blend/repos",
"events_url": "https://api.github.com/users/turmeric-blend/events{/privacy}",
"received_events_url": "https://api.github.com/users/turmeric-blend/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! That's because the user that uploaded that model didn't upload a TensorFlow version, only a PyTorch version. You can see it when you click on \"show all files\", you'll see that there is a `pytorch_model.bin`, but no `tf_model.h5`.\r\n\r\nHere you can solve this by telling the TF model that you want to load from pytorch weights:\r\n\r\n```py\r\nimport tensorflow as tf\r\nPRE_TRAINED_MODEL_NAME = 'ipuneetrathore/bert-base-cased-finetuned-finBERT'\r\nmodel = TFBertForSequenceClassification.from_pretrained(PRE_TRAINED_MODEL_NAME, from_pt=True) # <-- here\r\n```",
"and you could also ask the author (I believe @ipuneetrathore) if they could upload a TF version of the weights",
"hi @julien-c just wondering if there are any difference if the pytorch weights could be loaded through TF model anyway?",
"Just that the PyTorch weights will have to be converted on the fly every time you instantiate your TF model"
] | 1,593 | 1,594 | 1,593 | NONE | null | # ๐ Bug
## Information
Model I am using (Bert, XLNet ...): TFBertModel
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
This Works:
```
import torch
from transformers import BertForSequenceClassification
PRE_TRAINED_MODEL_NAME = 'ipuneetrathore/bert-base-cased-finetuned-finBERT'
model = BertForSequenceClassification.from_pretrained(PRE_TRAINED_MODEL_NAME)
# loads just fine
```
This Does NOT Work:
```
import tensorflow as tf
from transformers import TFBertForSequenceClassification
PRE_TRAINED_MODEL_NAME = 'ipuneetrathore/bert-base-cased-finetuned-finBERT'
model = TFBertForSequenceClassification.from_pretrained(PRE_TRAINED_MODEL_NAME)
# ERROR:
OSError: Can't load weights for 'ipuneetrathore/bert-base-cased-finetuned-finBERT'. Make sure that:
- 'ipuneetrathore/bert-base-cased-finetuned-finBERT' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'ipuneetrathore/bert-base-cased-finetuned-finBERT' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
```
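A likely workaround, assuming the checkpoint only ships `pytorch_model.bin` and no `tf_model.h5`: tell the TF class to convert the PyTorch weights on the fly.
```python
# Convert the PyTorch checkpoint to TF weights at load time.
model = TFBertForSequenceClassification.from_pretrained(
    PRE_TRAINED_MODEL_NAME, from_pt=True
)
```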
## Expected behavior
It should load the model.
## Environment info
- `transformers` version: 3.0.0
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.3.1 with GPU
- Tensorflow version (GPU?): 2.1.0 with GPU
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5463/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5462 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5462/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5462/comments | https://api.github.com/repos/huggingface/transformers/issues/5462/events | https://github.com/huggingface/transformers/pull/5462 | 649,779,491 | MDExOlB1bGxSZXF1ZXN0NDQzMzkyMDQ5 | 5,462 | Changed expected_output_ids in TransfoXL generation test | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=h1) Report\n> Merging [#5462](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/35befd9ce31c23a774fd34f57bc44033ce70141d&el=desc) will **decrease** coverage by `0.93%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5462 +/- ##\n==========================================\n- Coverage 77.57% 76.63% -0.94% \n==========================================\n Files 141 141 \n Lines 24581 24581 \n==========================================\n- Hits 19068 18838 -230 \n- Misses 5513 5743 +230 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.40% <0.00%> (+0.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.68% <0.00%> (+2.76%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5462/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=footer). Last update [35befd9...89278a5](https://codecov.io/gh/huggingface/transformers/pull/5462?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | #4826 fixed TransfoXL's `prepare_inputs_for_generation` function. This PR changes the expected outputs in the TransfoXL generation test to match the new correct outputs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5462/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5462",
"html_url": "https://github.com/huggingface/transformers/pull/5462",
"diff_url": "https://github.com/huggingface/transformers/pull/5462.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5462.patch",
"merged_at": 1593683804000
} |
https://api.github.com/repos/huggingface/transformers/issues/5461 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5461/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5461/comments | https://api.github.com/repos/huggingface/transformers/issues/5461/events | https://github.com/huggingface/transformers/issues/5461 | 649,766,428 | MDU6SXNzdWU2NDk3NjY0Mjg= | 5,461 | [Reformer] combine reformer model with other tokenizers | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm not sure I completely understand your process. You're loading a Reformer model - which one, with which checkpoint?\r\n\r\nYou want to use another tokenizer. Which one, loaded from which checkpoint?",
"I am using this notebook: https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb\r\nThe tokenizer I am using is the t5-large from the modelhub.\r\n\r\nRestarting my complete System solved the problem, seems like there was an error with cuda or the anaconda enviroment"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | # ๐ Bug
## Information
Model I am using (Bert, XLNet ...): Reformer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below) it's based on the Reformer MLM notebook, with the tokenizer swapped for T5 or RoBERTa
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below) masked language modeling
## To reproduce
Steps to reproduce the behavior:
1. Replace the Reformer tokenizer with the T5 or RoBERTa tokenizer.
<pre><code>
File "train_reformer.py", line 163, in <module>
    trainer.train()
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/trainer.py", line 499, in train
    tr_loss += self._training_step(model, inputs, optimizer)
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/trainer.py", line 622, in _training_step
    outputs = model(**inputs)
File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
File "/home/a-ware/.local/lib/python3.8/site-packages/apex/amp/_initialize.py", line 196, in new_fwd
    output = old_fwd(*applier(args, input_caster),
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 1853, in forward
    reformer_outputs = self.reformer(
File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 1623, in forward
    encoder_outputs = self.encoder(
File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 1368, in forward
    hidden_states = _ReversibleFunction.apply(
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 1267, in forward
    layer_outputs = layer(
File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 1145, in forward
    attn_outputs = self.attention(
File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 1006, in forward
    self_attention_outputs = self.self_attention(
File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 805, in forward
    query_vectors = self.query(hidden_states)
File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 87, in forward
    return F.linear(input, self.weight, self.bias)
File "/home/a-ware/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 1612, in linear
    output = input.matmul(weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`

wandb: Waiting for W&B process to finish, PID 142384
wandb: Program failed with code 1. Press ctrl-c to abort syncing.
wandb: Process crashed early, not syncing files
</code></pre>
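One hypothesis worth ruling out when swapping in a tokenizer with a different vocabulary: token ids larger than the model's embedding table can surface as opaque CUDA errors. A minimal sketch of the usual guard, assuming `model` and the swapped-in `tokenizer` are already constructed:
```python
# Hypothetical guard: grow/shrink the embedding matrix to the new vocab size.
model.resize_token_embeddings(len(tokenizer))
```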
## Expected behavior
## Environment info
- `transformers` version: master
- Platform:
- Python version: 3.8
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5461/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5460 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5460/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5460/comments | https://api.github.com/repos/huggingface/transformers/issues/5460/events | https://github.com/huggingface/transformers/issues/5460 | 649,657,582 | MDU6SXNzdWU2NDk2NTc1ODI= | 5,460 | BERT Huggingface trainer api: ValueError: expected sequence of length 128 at dim 1 (got 314) | {
"login": "quest4next",
"id": 16400458,
"node_id": "MDQ6VXNlcjE2NDAwNDU4",
"avatar_url": "https://avatars.githubusercontent.com/u/16400458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quest4next",
"html_url": "https://github.com/quest4next",
"followers_url": "https://api.github.com/users/quest4next/followers",
"following_url": "https://api.github.com/users/quest4next/following{/other_user}",
"gists_url": "https://api.github.com/users/quest4next/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quest4next/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quest4next/subscriptions",
"organizations_url": "https://api.github.com/users/quest4next/orgs",
"repos_url": "https://api.github.com/users/quest4next/repos",
"events_url": "https://api.github.com/users/quest4next/events{/privacy}",
"received_events_url": "https://api.github.com/users/quest4next/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I can reproduce! Thank you for opening an issue, I'm looking into it now.",
"Same error (StackOverflow--> https://stackoverflow.com/questions/67004233/typeerror-zeros-like-argument-input-when-fine-tuning-on-mlm) what was the fix in the end @LysandreJik ?",
"@LysandreJik Is there any way to fix this (for what I presume is a model pretrained on the old HuggingFace version)?",
"Sorry just seeing this now - @neel04 are you still facing the issue? @msamogh can you open a new issue and fill in the issue template (with full error, environment, code run)? Thanks",
"I don't really remember what I was trying to do :sweat_smile: Sorry, couldn't help you more. I think the problem was some changes in the API, while the example notebooks weren't updated at that time - so the `max_length` argument (which took `int`) didn't work leading to that error. Now, its changed to a `str` whic represents padding strategy - and since it works now with my current problem, I personally don't think the issue remains anymore :hugs: \r\n\r\nI haven't done the training though, so I would surely update you if I encouter it again!\r\n\r\n**EDIT:** My training works (albeit with small batch size), so you must have processed your data wrongly @msamogh. Check out the updated example notebooks to get an idea on how to build your datasets with the :hugs: `Datasets` lib"
] | 1,593 | 1,623 | 1,593 | NONE | null | # โ Questions & Help
## Details
I'm using the new Trainer API in HuggingFace Transformers to train on a GLUE task (QQP). This error shows up during training.
This is the example I'm using: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/trainer/01_text_classification.ipynb
The only change I made is the task: the notebook's GLUE task is MNLI, which I changed to QQP. While the original MNLI task runs without errors, the QQP task fails with:
ValueError: expected sequence of length 128 at dim 1 (got 314)
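One plausible cause, not verified against the notebook: if the QQP preprocessing does not pad and truncate to a fixed length, batches can contain sequences longer than 128 tokens. A sketch of a fix, where the field names `question1`/`question2` come from the GLUE QQP schema and `tokenizer` is assumed to be the one used in the notebook:
```python
def preprocess(example):
    # Force every example to exactly 128 tokens.
    return tokenizer(
        example["question1"],
        example["question2"],
        max_length=128,
        truncation=True,       # cut sequences longer than 128
        padding="max_length",  # pad shorter ones up to 128
    )
```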
Please see my Stack Overflow question for more details:
**https://stackoverflow.com/questions/62675482/bert-huggingface-trainer-api-valueerror-expected-sequence-of-length-128-at-dim**
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5460/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5459 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5459/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5459/comments | https://api.github.com/repos/huggingface/transformers/issues/5459/events | https://github.com/huggingface/transformers/issues/5459 | 649,642,911 | MDU6SXNzdWU2NDk2NDI5MTE= | 5,459 | Error while saving model: TypeError: ('Not JSON Serializable:', DistilBertConfig | {
"login": "msahamed",
"id": 8838524,
"node_id": "MDQ6VXNlcjg4Mzg1MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8838524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/msahamed",
"html_url": "https://github.com/msahamed",
"followers_url": "https://api.github.com/users/msahamed/followers",
"following_url": "https://api.github.com/users/msahamed/following{/other_user}",
"gists_url": "https://api.github.com/users/msahamed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/msahamed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/msahamed/subscriptions",
"organizations_url": "https://api.github.com/users/msahamed/orgs",
"repos_url": "https://api.github.com/users/msahamed/repos",
"events_url": "https://api.github.com/users/msahamed/events{/privacy}",
"received_events_url": "https://api.github.com/users/msahamed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! The way to save the transformers model is using the `save_pretrained` method, which saves both the configuration and the model as an h5 file. Can you try using it instead?",
"> Hi! The way to save the transformers model is using the `save_pretrained` method, which saves both the configuration and the model as an h5 file. Can you try using it instead?\r\n\r\nI am not saving the \"transformers model\" instead use it as a top layer of a Keras model. Then error occurs when saving the model that includes the \"transformers model.\"",
"Ok, maybe @jplu or @patrickvonplaten can have a look when they have some bandwidth.",
"As a first glance, I can say that it is \"normal\" because the `DistilBert` model has a config parameter, which doesn't make it compliant with sequential models. Create a subclass model instead to see if it works.\r\n\r\nBut this is just a quick guess, I will check it deeper when have some time.",
"I found that, model could be save in tensorflow saved_model using: \r\n`tf.saved_model.save(model, './models/model')`\r\n\r\nHowever, I was not able to save in Keras .h5 format. That's fine for me now. So, I close this issue. "
] | 1,593 | 1,594 | 1,594 | NONE | null | # ๐ Bug
## Information
In this problem, I am using the pre-trained **DistilBERT** model embedding to build a custom model (see the code snippet below). Everything works perfectly fine except saving the model (see the error below). I am using the latest version of the transformers library, which is 3.0.0. I could not save the same model even when using the previous version, 2.11 (see this issue: [https://github.com/huggingface/transformers/issues/4444](https://github.com/huggingface/transformers/issues/4444)).
I was just wondering if you could help me solve the problem.
## Code
```
config = DistilBertConfig.from_pretrained( 'distilbert-base-uncased')
config.output_hidden_states = False
distillbert_main = TFDistilBertMainLayer(config = config)
input_word_ids = tf.keras.layers.Input(shape=(8,), dtype=tf.int32, name="input_word_ids")
x = distillbert_main(input_word_ids)[0]
x = tf.keras.layers.Lambda(lambda seq: seq[:, 0, :])(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dropout(0.2)(x)
out = tf.keras.layers.Dense(2)(x)
model = tf.keras.Model(inputs=input_word_ids, outputs=out)
for layer in model.layers[:3]:
layer.trainable = False
model.summary() # Works fine
model.get_config() # Works fine
model.save('./model.h5') # Does not work and produce error
```
## Error
```
TypeError Traceback (most recent call last)
<ipython-input-32-1fbe6dabead0> in <module>
----> 1 model.save('./model.h5')
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options)
1050 """
1051 save.save_model(self, filepath, overwrite, include_optimizer, save_format,
-> 1052 signatures, options)
1053
1054 def save_weights(self, filepath, overwrite=True, save_format=None):
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options)
133 'or using `save_weights`.')
134 hdf5_format.save_model_to_hdf5(
--> 135 model, filepath, overwrite, include_optimizer)
136 else:
137 saved_model_save.save(model, filepath, overwrite, include_optimizer,
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py in save_model_to_hdf5(model, filepath, overwrite, include_optimizer)
111 if isinstance(v, (dict, list, tuple)):
112 f.attrs[k] = json.dumps(
--> 113 v, default=serialization.get_json_type).encode('utf8')
114 else:
115 f.attrs[k] = v
/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
236 check_circular=check_circular, allow_nan=allow_nan, indent=indent,
237 separators=separators, default=default, sort_keys=sort_keys,
--> 238 **kw).encode(obj)
239
240
/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/encoder.py in encode(self, o)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/json/encoder.py in iterencode(self, o, _one_shot)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
/usr/local/lib/python3.7/site-packages/tensorflow/python/util/serialization.py in get_json_type(obj)
74 return obj.__wrapped__
75
---> 76 raise TypeError('Not JSON Serializable:', obj)
TypeError: ('Not JSON Serializable:', DistilBertConfig {
"activation": "gelu",
"architectures": [
"DistilBertForMaskedLM"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"vocab_size": 30522
}
)
```
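(For what it's worth, a workaround sketch — not a fix for the `.h5` path itself: save in TensorFlow's SavedModel format, or persist only the weights, using the `model` built above.)

```python
import tensorflow as tf

# The SavedModel format sidesteps the JSON serialization of DistilBertConfig
tf.saved_model.save(model, "./models/model")

# Alternatively, save only the weights and rebuild the architecture in code
model.save_weights("./models/weights.h5")
```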
- `transformers` version: 3.0.0
- Platform: Mac OSX
- Python version: 3.7
- PyTorch version (GPU?): No
- Tensorflow version: 2.2.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: NO
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5459/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/5459/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5458 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5458/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5458/comments | https://api.github.com/repos/huggingface/transformers/issues/5458/events | https://github.com/huggingface/transformers/issues/5458 | 649,638,254 | MDU6SXNzdWU2NDk2MzgyNTQ= | 5,458 | ๐ Can't use `AutoTokenizer` with `sshleifer/mbart-large-cc25` | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Ha, I didn't know the Tokenizer between Bart and mBart is different. I just noticed there is a class `MBartTokenizer`.\r\n\r\nIt seems like this class is not documented on [HuggingFace documentation](https://huggingface.co/transformers/model_doc/bart.html). Maybe we should consider adding it ?\r\n\r\n_Also the model card for `sshleifer/mbart-large-cc25` may need an update_\r\n\r\n---\r\n\r\nAlso, the following is still not working :\r\n\r\n```python\r\nfrom transformers import MBartTokenizer\r\n\r\ntokenizer = MBartTokenizer.from_pretrained(\"sshleifer/mbart-large-cc25\")\r\n```\r\n\r\nOnly `tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro')` seems to work.\r\n\r\n**Can I use Tokenizer from checkpoint `facebook/mbart-large-en-ro` for the model `sshleifer/mbart-large-cc25` ?**",
"mbart-large-cc25 does not work well yet, the PR is still open #3513.\r\nNonetheless these are all things I should fix, thanks!",
"For your second question, no. At the moment that tokenizer will not work well. Do you have fairseq cc25 working well?\r\n\r\n**Update:** just moved it to `facebook/mbart-large-cc25`. AutoTokenizer should work. ",
"I didn't try the fairseq model, went directly for HF implementation ^^"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | # ๐ Bug
From [`sshleifer/mbart-large-cc25`](https://huggingface.co/sshleifer/mbart-large-cc25):
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sshleifer/mbart-large-cc25")
```
---
Running this code yields an error:
>OSError: Model name 'sshleifer/mbart-large-cc25' was not found in tokenizers model name list (facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum). We assumed 'sshleifer/mbart-large-cc25' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
---
Which tokenizer should I use with this model?
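(A sketch of the one loading call that worked for me at the time — note the checkpoint naming was in flux, and per the discussion below the cc25 weights later moved to `facebook/mbart-large-cc25`:)

```python
from transformers import MBartTokenizer

# The en-ro checkpoint ships tokenizer files, so this loads;
# whether it is appropriate for mbart-large-cc25 is a separate question
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
```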
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5458/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5457 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5457/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5457/comments | https://api.github.com/repos/huggingface/transformers/issues/5457/events | https://github.com/huggingface/transformers/pull/5457 | 649,618,888 | MDExOlB1bGxSZXF1ZXN0NDQzMjYwNDAx | 5,457 | [Bart] enable test_torchscript, update test_tie_weights | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=h1) Report\n> Merging [#5457](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/306f1a269504b781f886d75105acabf8ae95bd11&el=desc) will **decrease** coverage by `0.28%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5457 +/- ##\n==========================================\n- Coverage 77.86% 77.57% -0.29% \n==========================================\n Files 141 141 \n Lines 24608 24608 \n==========================================\n- Hits 19160 19089 -71 \n- Misses 5448 5519 +71 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `32.43% <0.00%> (-55.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.18% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5457/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.37% <0.00%> (+25.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=footer). Last update [306f1a2...e08f8b7](https://codecov.io/gh/huggingface/transformers/pull/5457?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | This sets `test_torchscript=True` for BART and removes unneeded asserts in `test_tie_weights`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5457/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5457",
"html_url": "https://github.com/huggingface/transformers/pull/5457",
"diff_url": "https://github.com/huggingface/transformers/pull/5457.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5457.patch",
"merged_at": 1594130809000
} |
https://api.github.com/repos/huggingface/transformers/issues/5456 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5456/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5456/comments | https://api.github.com/repos/huggingface/transformers/issues/5456/events | https://github.com/huggingface/transformers/pull/5456 | 649,581,040 | MDExOlB1bGxSZXF1ZXN0NDQzMjI2MjY1 | 5,456 | Add description of required special symbols | {
"login": "chrisliu298",
"id": 59010212,
"node_id": "MDQ6VXNlcjU5MDEwMjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/59010212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisliu298",
"html_url": "https://github.com/chrisliu298",
"followers_url": "https://api.github.com/users/chrisliu298/followers",
"following_url": "https://api.github.com/users/chrisliu298/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisliu298/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisliu298/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisliu298/subscriptions",
"organizations_url": "https://api.github.com/users/chrisliu298/orgs",
"repos_url": "https://api.github.com/users/chrisliu298/repos",
"events_url": "https://api.github.com/users/chrisliu298/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisliu298/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=h1) Report\n> Merging [#5456](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/306f1a269504b781f886d75105acabf8ae95bd11&el=desc) will **decrease** coverage by `1.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5456 +/- ##\n==========================================\n- Coverage 77.86% 76.80% -1.06% \n==========================================\n Files 141 141 \n Lines 24608 24608 \n==========================================\n- Hits 19160 18901 -259 \n- Misses 5448 5707 +259 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `70.76% <0.00%> (-13.08%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.15% <0.00%> (-6.29%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.89% <0.00%> (-1.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.37% <0.00%> (+25.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5456/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=footer). Last update [306f1a2...8ce649f](https://codecov.io/gh/huggingface/transformers/pull/5456?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5456/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5456",
"html_url": "https://github.com/huggingface/transformers/pull/5456",
"diff_url": "https://github.com/huggingface/transformers/pull/5456.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5456.patch",
"merged_at": 1593778588000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5455 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5455/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5455/comments | https://api.github.com/repos/huggingface/transformers/issues/5455/events | https://github.com/huggingface/transformers/issues/5455 | 649,552,028 | MDU6SXNzdWU2NDk1NTIwMjg= | 5,455 | How to batch encode sentences using BertTokenizer? | {
"login": "RayLei",
"id": 1709968,
"node_id": "MDQ6VXNlcjE3MDk5Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1709968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RayLei",
"html_url": "https://github.com/RayLei",
"followers_url": "https://api.github.com/users/RayLei/followers",
"following_url": "https://api.github.com/users/RayLei/following{/other_user}",
"gists_url": "https://api.github.com/users/RayLei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RayLei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RayLei/subscriptions",
"organizations_url": "https://api.github.com/users/RayLei/orgs",
"repos_url": "https://api.github.com/users/RayLei/repos",
"events_url": "https://api.github.com/users/RayLei/events{/privacy}",
"received_events_url": "https://api.github.com/users/RayLei/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Hi @RayLei Have a look at this https://huggingface.co/transformers/preprocessing.html"
] | 1,593 | 1,595 | 1,595 | NONE | null | # โ Questions & Help
## Details
I would like to create a minibatch by encoding multiple sentences using `transformers.BertTokenizer`. How can I do it? I tried the following code.
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer.encode('this is the first sentence')
>>> [2023, 2003, 1996, 2034, 6251]
tokenizer.encode(['this is the first sentence', 'another setence'])
>>> [100, 100] # expecting 7 tokens
```
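As far as I can tell, `encode` treats a list of strings as an already-tokenized sequence, so each whole sentence is looked up as a single (unknown) token — hence the two `[UNK]` ids (100). A sketch of batch encoding instead:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

batch = tokenizer(
    ['this is the first sentence', 'another sentence'],
    padding=True,  # pad the shorter sentence so the batch is rectangular
)
print(batch['input_ids'])  # one list of token ids per sentence
```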
**A link to the original question on Stack Overflow**:
https://stackoverflow.com/questions/62669261/how-to-encode-multiple-setence-using-transformers-berttokenizer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5455/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5455/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5454 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5454/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5454/comments | https://api.github.com/repos/huggingface/transformers/issues/5454/events | https://github.com/huggingface/transformers/issues/5454 | 649,453,705 | MDU6SXNzdWU2NDk0NTM3MDU= | 5,454 | Error while saving Longformer pre-trained model | {
"login": "danishpruthi",
"id": 4627113,
"node_id": "MDQ6VXNlcjQ2MjcxMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4627113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danishpruthi",
"html_url": "https://github.com/danishpruthi",
"followers_url": "https://api.github.com/users/danishpruthi/followers",
"following_url": "https://api.github.com/users/danishpruthi/following{/other_user}",
"gists_url": "https://api.github.com/users/danishpruthi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danishpruthi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danishpruthi/subscriptions",
"organizations_url": "https://api.github.com/users/danishpruthi/orgs",
"repos_url": "https://api.github.com/users/danishpruthi/repos",
"events_url": "https://api.github.com/users/danishpruthi/events{/privacy}",
"received_events_url": "https://api.github.com/users/danishpruthi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"+1, I got the same error.",
"Hi, do you mind pasting your environment information? Especially related to your transformers and tokenizers versions.",
"Hi @LysandreJik, thanks for checking in. I am using the version 2.11.0 of the transformers library, and tokenizers==0.7.0. \r\n\r\nFollowing is the associated [config file](https://s3.amazonaws.com/models.huggingface.co/bert/allenai/longformer-large-4096-finetuned-triviaqa/config.json). It doesn't say much about the tokenizer version, but I think the tokenizers are too loaded from `LongformerForQuestionAnswering.from_pretrained(\"allenai/longformer-large-4096-finetuned-triviaqa\")`\r\n\r\n```\r\n{\r\n \"architectures\": [\r\n \"LongformerForQuestionAnswering\"\r\n ],\r\n \"attention_mode\": \"longformer\",\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"attention_window\": [\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512,\r\n 512\r\n ],\r\n \"bos_token_id\": 0,\r\n \"eos_token_id\": 2,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 1024,\r\n \"ignore_attention_mask\": false,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 4096,\r\n \"layer_norm_eps\": 1e-05,\r\n \"max_position_embeddings\": 4098,\r\n \"model_type\": \"longformer\",\r\n \"num_attention_heads\": 16,\r\n \"num_hidden_layers\": 24,\r\n \"pad_token_id\": 1,\r\n \"sep_token_id\": 2,\r\n \"type_vocab_size\": 1,\r\n \"vocab_size\": 50265\r\n}\r\n```",
"A simple way to reproduce the problem is the following:\r\n\r\n```python\r\nimport transformers\r\nfrom transformers import * \r\ntokenizer = LongformerTokenizer.from_pretrained(\"allenai/longformer-base-4096\")\r\ntokenizer.save_pretrained(\"~/\")\r\n```",
"I think I found out where the problem lies:\r\n \r\n```python\r\ntokenizer.special_tokens_map_extended.items()\r\n```\r\nThere are these special tokens which are instances of `AddedToken` which do not have a `__getstate__` function which is called in line 1368 of `tokenization_utils_base.py`\r\n\r\n` \r\ndict_items([('bos_token', AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False)), ('eos_token', AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False)), ('unk_token', AddedToken(\"<unk>\", rstrip=False, lstrip=False, single_word=False)), ('sep_token', AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False)), ('pad_token', AddedToken(\"<pad>\", rstrip=False, lstrip=False, single_word=False)), ('cls_token', AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False)), ('mask_token', AddedToken(\"<mask>\", rstrip=False, lstrip=True, single_word=False))])\r\n`",
"Hmmm, I can't reproduce on my end with your versions. Three questions:\r\n\r\n- Did you install from source? If you did, it's possible that you have some tokenizer changes that were intended for version 3.0.0. In that case, could you try installing tokenizers==0.8.0, that has the necessary changes to handle that?\r\n- Is it possible for you to reinstall both transformers and tokenizers to check? `pip install -U transformers==2.11.0` and `pip install -U tokenizers==0.8.0`\r\n- **If all else fails, is it a possibility for you to install the latest versions? A simple `pip install -U transformers` should take care of it.**\r\n\r\nLet me know if any of these fix your issue.",
"Actually, I never pip-installed the tranformers library, I am just running the cloned github code from a few days ago (this is because I had to edit some parts of the code for my use case).\r\n\r\nHowever, when I pip installed these versions, surprisingly, I don't see this error. As you suggest, it is possible that some tokenizer changes that were intended for version 3.0.0 crept in.\r\n\r\nIn the cloned code that I am using, if I change the following line to:\r\n\r\nhttps://github.com/huggingface/transformers/blob/ef0e9d806c51059b07b98cb0279a20d3ba3cbc1d/src/transformers/tokenization_utils_base.py#L1368\r\n\r\n```python\r\nwrite_dict[key] = value.content # instead of __getstate__()\r\n```\r\n\r\nThe problem is fixed. \r\n\r\n\r\n\r\n\r\n\r\n",
"> Actually, I never pip-installed the tranformers library, I am just running the cloned github code from a few days ago (this is because I had to edit some parts of the code for my use case).\r\n> \r\n> However, when I pip installed these versions, surprisingly, I don't see this error. As you suggest, it is possible that some tokenizer changes that were intended for version 3.0.0 crept in.\r\n> \r\n> In the cloned code that I am using, if I change the following line to:\r\n> \r\n> https://github.com/huggingface/transformers/blob/ef0e9d806c51059b07b98cb0279a20d3ba3cbc1d/src/transformers/tokenization_utils_base.py#L1368\r\n> \r\n> ```python\r\n> write_dict[key] = value.content # instead of __getstate__()\r\n> ```\r\n> \r\n> The problem is fixed.\r\n\r\nJust had the same issue with version 3.0.2 while fine-tuning the Robert-base model. Guess, it would have been the same with other BERT-base models.\r\nChanging this line solved the issue.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"`pip install -U tokenizers==0.8.0` solved this!!!!"
] | 1,593 | 1,609 | 1,600 | NONE | null | Thanks for the transformers library!
## Information
I am trying to fine-tune a pre-trained model of type `LongformerForQuestionAnswering` on a custom QA dataset, using a custom script morphed from `run_squad.py`. The pre-trained model is `allenai/longformer-large-4096-finetuned-triviaqa`.
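For context, a minimal reproduction distilled from the discussion below (assuming a source checkout where the `AddedToken` special tokens lack `__getstate__`):

```python
from transformers import LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
# Raises AttributeError: 'AddedToken' object has no attribute '__getstate__'
tokenizer.save_pretrained("./saved_tokenizer")
```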
While saving the pretrained model, I run into the following error:
```
Traceback (most recent call last):
File "examples/question-answering/run_nq.py", line 809, in <module>
main()
File "examples/question-answering/run_nq.py", line 752, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "examples/question-answering/run_nq.py", line 248, in train
tokenizer.save_pretrained(output_dir)
File "/home/danishp/git/explain-qa/src/third_party/transformers/src/transformers/tokenization_utils_base.py", line 1368, in save_pretrained
write_dict[key] = value.__getstate__()
AttributeError: 'AddedToken' object has no attribute '__getstate__'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5454/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5453 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5453/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5453/comments | https://api.github.com/repos/huggingface/transformers/issues/5453/events | https://github.com/huggingface/transformers/issues/5453 | 649,432,908 | MDU6SXNzdWU2NDk0MzI5MDg= | 5,453 | The output to be used for getting sentence embeddings from BERT | {
"login": "AkshitaJha",
"id": 8939340,
"node_id": "MDQ6VXNlcjg5MzkzNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8939340?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AkshitaJha",
"html_url": "https://github.com/AkshitaJha",
"followers_url": "https://api.github.com/users/AkshitaJha/followers",
"following_url": "https://api.github.com/users/AkshitaJha/following{/other_user}",
"gists_url": "https://api.github.com/users/AkshitaJha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AkshitaJha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AkshitaJha/subscriptions",
"organizations_url": "https://api.github.com/users/AkshitaJha/orgs",
"repos_url": "https://api.github.com/users/AkshitaJha/repos",
"events_url": "https://api.github.com/users/AkshitaJha/events{/privacy}",
"received_events_url": "https://api.github.com/users/AkshitaJha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @AkshitaJha , what is your downstream task ? \r\nAlso you may wanna try this out for sentence embeddings \r\nhttps://huggingface.co/deepset/sentence_bert\r\nhttps://github.com/UKPLab/sentence-transformers\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | What is the output that we should be using to get embeddings for a sentence using BERT? When I load the pre-trained BERT model ([BertModel](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel)) from huggingface for inference, should I be using the `pooler_output`, the output of the last hidden layer, or something else?
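(For concreteness, a sketch of one common recipe — attention-mask-aware mean pooling over the last hidden states; this is my illustration, not an official recommendation:)

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer(["An example sentence."], padding=True, return_tensors="pt")
with torch.no_grad():
    last_hidden_state = model(**inputs)[0]  # (batch, seq_len, hidden)

# Mean-pool over real tokens only (padding positions masked out)
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (last_hidden_state * mask).sum(1) / mask.sum(1)
```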
While fine-tuning BERT, which huggingface module should be used for getting sentence embeddings? Is it the [BertForSequenceClassification](https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification), [BertForMaskedLM](https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm), [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel), or some other module? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5453/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5453/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5452 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5452/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5452/comments | https://api.github.com/repos/huggingface/transformers/issues/5452/events | https://github.com/huggingface/transformers/issues/5452 | 649,432,403 | MDU6SXNzdWU2NDk0MzI0MDM= | 5,452 | Text Classification with PyTorch Lightning: 'dict' object has no attribute 'task' | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] | closed | false | null | [] | [
"You could manually cast it to a namespace with \r\n```python\r\nargparse.Namespace(**ckpt[\"hparams\"])\r\n```\r\n\r\nBut @williamFalcon may have a cleaner solution\r\n",
"I added it with a very *very* dirty fix, in GLUETransformer init added this to avoid cast it to Namespace if it was a dict\r\n\r\n`if type(hparams) is dict:\r\n hparams = Namespace(**hparams) `",
"The official way to do this is to call `self.save_hyperparameters(hparams)` in the constructor of the module - then the hyperparameters will be accessible through `self.hparams['some_param']` and `self.hparams.some_param` as well.",
"@nagyrajmund Hey, but that looks like it does not solve the issue. Even without save_hyperparameters() call, it will save the hparams in the checkpoint and the yaml file.",
"Hey-hey,\r\n\r\nI think you misunderstood me, my proposed fix is to replace [this line](https://github.com/huggingface/transformers/blob/33d7506ea10ca92886fd1bb3b5306a1a720c58fe/examples/lightning_base.py#L59) with `self.save_hyperparameters(hparams)`. Then the hparams will be loaded correctly from the checkpoint without changing any other functionality in the module. Let me know if you run into any issues :)",
"@nateraw @borda ",
"the conclusion after sharing min exmple is missing `self.save_hyperparameters()` in init\r\nhttps://pytorch-lightning.slack.com/archives/CRBLFHY79/p1595502354412700",
"*EDIT: Does not work as intended, please check the other comments*\r\n\r\n> @nagyrajmund Hey, but that looks like it does not solve the issue. Even without save_hyperparameters() call, it will save the hparams in the checkpoint and the yaml file.\r\n\r\nIt does work, i think as @Borda mentioned the example is missing that. Among, `gpus` parameter and `load_datasets()` functions were the issues. ",
"@bhashithe mind share the code or is it this example? transformers/examples/text-classification/run_pl_glue.py",
"*EDIT: Does not work as intended, please check the other comments*\r\n@Borda It is actually the example, but i had to alter both lightning_base.py and run_pl_glue.py to get it to work.",
"would you mind sending a PR with your fix @bhashithe ?",
"No problem, let me send that now.",
"Sorry @Borda that save_hyperparameters() fix does not work @nagyrajmund \r\n\r\nSmall oversight on my part, anyway i have it working by resetting hparams to be a Namespace().",
"Created #6027 with fixes.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,601 | 1,601 | COLLABORATOR | null | Hi,
after manually resolving the `n_gpu` attribute issue in `lightning_base.py` (see #5385), I found another strange behaviour in the Text Classification example.
I used PL in version *0.8.1* with the `run_pl.sh` script. Training works, but after reloading the model for evaluation, the following error message is thrown:
```bash
Traceback (most recent call last):
File "run_pl_glue.py", line 189, in <module>
model = model.load_from_checkpoint(checkpoints[-1])
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/saving.py", line 171, in load_from_checkpoint
model = cls._load_model_state(checkpoint, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/saving.py", line 201, in _load_model_state
model = cls(*args, **kwargs)
File "run_pl_glue.py", line 28, in __init__
hparams.glue_output_mode = glue_output_modes[hparams.task]
AttributeError: 'dict' object has no attribute 'task'
```
I did some debugging. So the interesting part is in the constructor:
https://github.com/huggingface/transformers/blob/306f1a269504b781f886d75105acabf8ae95bd11/examples/text-classification/run_pl_glue.py#L26-L30
For training (first initialization), the `hparams` variable outputs:
```python
Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='./glue_data/MRPC/', do_predict=True, do_train=True, eval_batch_size=32, fast_dev_run=False, fp16=True, fp16_opt_level='O1', gpus=1, gradient_accumulation_steps=1, learning_rate=2e-05, max_grad_norm=1.0, max_seq_length=128, model_name_or_path='bert-base-cased', n_tpu_cores=0, num_train_epochs=1, num_workers=4, output_dir='/mnt/transformers-pl/examples/text-classification/mrpc-pl-bert', overwrite_cache=False, resume_from_checkpoint=None, seed=2, task='mrpc', tokenizer_name=None, train_batch_size=32, val_check_interval=1.0, warmup_steps=0, weight_decay=0.0)
```
Notice the type: it is a `Namespace`. After training... and re-loading the model checkpoint, `hparams` looks like:
```python
{'output_dir': '/mnt/transformers-pl/examples/text-classification/mrpc-pl-bert', 'fp16': True, 'fp16_opt_level': 'O1', 'fast_dev_run': False, 'gpus': 1, 'n_tpu_cores': 0, 'max_grad_norm': 1.0, 'do_train': True, 'do_predict': True, 'gradient_accumulation_steps': 1, 'seed': 2, 'resume_from_checkpoint': None, 'val_check_interval': 1.0, 'model_name_or_path': 'bert-base-cased', 'config_name': '', 'tokenizer_name': None, 'cache_dir': '', 'learning_rate': 2e-05, 'weight_decay': 0.0, 'adam_epsilon': 1e-08, 'warmup_steps': 0, 'num_workers': 4, 'num_train_epochs': 1, 'train_batch_size': 32, 'eval_batch_size': 32, 'max_seq_length': 128, 'task': 'mrpc', 'data_dir': './glue_data/MRPC/', 'overwrite_cache': False, 'glue_output_mode': 'classification'}
```
It's strange, because it is now a plain dictionary, so `hparams.task` no longer works 😢
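As a stopgap, the reloaded dict can be cast back before any attribute-style access — a sketch of the guard, not a proper fix:

```python
from argparse import Namespace

def ensure_namespace(hparams):
    """Checkpoint reloads hand back a plain dict; cast it so `hparams.task` works."""
    return Namespace(**hparams) if isinstance(hparams, dict) else hparams
```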
@sshleifer could you help with that issue ๐ค | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5452/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5451 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5451/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5451/comments | https://api.github.com/repos/huggingface/transformers/issues/5451/events | https://github.com/huggingface/transformers/issues/5451 | 649,394,507 | MDU6SXNzdWU2NDkzOTQ1MDc= | 5,451 | TF: inputs vs input_ids | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Think it's required for some weird keras inner workings, or @LysandreJik ? I remember I had to change them to `inputs` in TF T5 at some point as well.",
"### Background for Keras inner workings:\r\n\r\n(taken from the docs)\r\n\r\nTF 2.0 models accepts two formats as inputs:\r\n\r\n- having all inputs as keyword arguments (like PyTorch models), or\r\n- having all inputs as a list, tuple or dict in the first positional arguments.\r\n\r\nIf you choose the second option, there are three possibilities you can use to gather all the input Tensors\r\nin the first positional argument :\r\n\r\n- a single Tensor with input_ids only and nothing else: `model(inputs_ids)`\r\n- a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:\r\n `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])`\r\n- a dictionary with one or several input Tensors associated to the input names given in the docstring:\r\n `model({'input_ids': input_ids, 'token_type_ids': token_type_ids})`\r\n\r\nThe first argument name is therefore more appropriate as `inputs` rather than `input_ids`, since it can contain all the inputs. This is why you can see such a snippet at the beginning of each transformer layer, in order to gather all inputs:\r\n\r\n```py\r\n if isinstance(inputs, (tuple, list)):\r\n input_ids = inputs[0]\r\n past = inputs[1] if len(inputs) > 1 else past\r\n attention_mask = inputs[2] if len(inputs) > 2 else attention_mask\r\n token_type_ids = inputs[3] if len(inputs) > 3 else token_type_ids\r\n [...]\r\n assert len(inputs) <= 10, \"Too many inputs.\"\r\n elif isinstance(inputs, (dict, BatchEncoding)):\r\n input_ids = inputs.get(\"input_ids\")\r\n past = inputs.get(\"past\", past)\r\n attention_mask = inputs.get(\"attention_mask\", attention_mask)\r\n token_type_ids = inputs.get(\"token_type_ids\", token_type_ids)\r\n [...]\r\n assert len(inputs) <= 10, \"Too many inputs.\"\r\n else:\r\n input_ids = inputs\r\n```\r\n\r\n### Actual reason why things are done this way in the tests:\r\n\r\nIt stems from that PR: https://github.com/huggingface/transformers/pull/3547.\r\n\r\nPreviously it was written in the T5 forward pass as `decoder_input_ids`, while it could be a dict and, therefore, contain everything. Looking at it now, I guess it could be put as `input_ids` too (since it's a positional argument, the naming doesn't really matter). ",
"T5 is supporting \r\n\r\n```python\r\ndef call(inputs, **kwargs):\r\n if isinstance(inputs, dict):\r\n kwargs.update(inputs)\r\n else:\r\n kwargs[\"inputs\"] = inputs\r\n\r\n # retrieve arguments\r\n inputs = kwargs.get(\"inputs\", None)\r\n\t\t...\r\n```\r\nI will try to kwarg everything, because to me this is an explosion of input types and boilerplate.",
"I see what you mean now @sshleifer! Yes you are right in T5 the name was wrong IMO. Fixing this now in a bigger TF refactor PR.",
"So as you said this line:\r\nhttps://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_common.py#L328\r\nshould be changed to just:\r\n```\r\ninput_ids = inputs_keywords.pop(\"input_ids\", None)\r\n```\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,598 | 1,598 | CONTRIBUTOR | null | Why should TF encoder-decoder models take `inputs` instead of `input_ids`?
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_common.py#L328
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5451/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5450 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5450/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5450/comments | https://api.github.com/repos/huggingface/transformers/issues/5450/events | https://github.com/huggingface/transformers/pull/5450 | 649,359,968 | MDExOlB1bGxSZXF1ZXN0NDQzMDIzMTUz | 5,450 | Add Reformer MLM notebook | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | MEMBER | null | adds a simple notebook on how to do MLM with Reformer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5450/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5450",
"html_url": "https://github.com/huggingface/transformers/pull/5450",
"diff_url": "https://github.com/huggingface/transformers/pull/5450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5450.patch",
"merged_at": 1593642050000
} |
https://api.github.com/repos/huggingface/transformers/issues/5449 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5449/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5449/comments | https://api.github.com/repos/huggingface/transformers/issues/5449/events | https://github.com/huggingface/transformers/pull/5449 | 649,352,879 | MDExOlB1bGxSZXF1ZXN0NDQzMDE3MTI3 | 5,449 | Guide to fixed-length model perplexity evaluation | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=h1) Report\n> Merging [#5449](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d16e36c7e525aab4c08a6e60a7478e209498dc14&el=desc) will **increase** coverage by `0.86%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5449 +/- ##\n==========================================\n+ Coverage 77.82% 78.68% +0.86% \n==========================================\n Files 141 141 \n Lines 24608 24608 \n==========================================\n+ Hits 19150 19364 +214 \n+ Misses 5458 5244 -214 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.10% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.68% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=footer). Last update [d16e36c...b3dae20](https://codecov.io/gh/huggingface/transformers/pull/5449?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,598 | 1,594 | CONTRIBUTOR | null | This post / guide is inspired by this recent [Twitter discussion](https://twitter.com/myleott/status/1245840363262283776) and [this gist](https://gist.github.com/myleott/cdf685b8b3ce20b0221e1842782bce74) on the different ways that perplexity can be evaluated and the optimal strategy of a strided "sliding window".
Interested in feedback both on the guide/writing component and on the theoretical discussion of PPL. Right now my understanding is that our language modeling script uses non-overlapping segments rather than a sliding window.
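For reference, here is roughly the kind of strided evaluation the guide walks through (a minimal sketch assuming GPT-2 and an already-tokenized `input_ids` tensor; variable names are illustrative rather than the exact code in the guide):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "a long evaluation corpus goes here ..."  # placeholder
input_ids = tokenizer.encode(text, return_tensors="pt")
max_length, stride = 1024, 512

nlls = []
for i in range(0, input_ids.size(1), stride):
    begin_loc = max(i + stride - max_length, 0)
    end_loc = min(i + stride, input_ids.size(1))
    trg_len = end_loc - i  # only score tokens not already scored by a previous window
    ids = input_ids[:, begin_loc:end_loc]
    target_ids = ids.clone()
    target_ids[:, :-trg_len] = -100  # context-only positions are ignored by the loss
    with torch.no_grad():
        loss = model(ids, labels=target_ids)[0]
    nlls.append(loss * trg_len)

ppl = torch.exp(torch.stack(nlls).sum() / end_loc)
```

With `stride == max_length` this degrades to the non-overlapping evaluation mentioned above; shrinking the stride gives each scored token more context at the cost of more forward passes.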
Relevant to #4415, #4219. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5449/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5449",
"html_url": "https://github.com/huggingface/transformers/pull/5449",
"diff_url": "https://github.com/huggingface/transformers/pull/5449.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5449.patch",
"merged_at": 1594159455000
} |
https://api.github.com/repos/huggingface/transformers/issues/5448 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5448/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5448/comments | https://api.github.com/repos/huggingface/transformers/issues/5448/events | https://github.com/huggingface/transformers/pull/5448 | 649,321,511 | MDExOlB1bGxSZXF1ZXN0NDQyOTkwNTIx | 5,448 | grammar corrections and train data update | {
"login": "DeepsMoseli",
"id": 29062994,
"node_id": "MDQ6VXNlcjI5MDYyOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/29062994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DeepsMoseli",
"html_url": "https://github.com/DeepsMoseli",
"followers_url": "https://api.github.com/users/DeepsMoseli/followers",
"following_url": "https://api.github.com/users/DeepsMoseli/following{/other_user}",
"gists_url": "https://api.github.com/users/DeepsMoseli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DeepsMoseli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DeepsMoseli/subscriptions",
"organizations_url": "https://api.github.com/users/DeepsMoseli/orgs",
"repos_url": "https://api.github.com/users/DeepsMoseli/repos",
"events_url": "https://api.github.com/users/DeepsMoseli/events{/privacy}",
"received_events_url": "https://api.github.com/users/DeepsMoseli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=h1) Report\n> Merging [#5448](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d16e36c7e525aab4c08a6e60a7478e209498dc14&el=desc) will **decrease** coverage by `0.86%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5448 +/- ##\n==========================================\n- Coverage 77.82% 76.95% -0.87% \n==========================================\n Files 141 141 \n Lines 24608 24608 \n==========================================\n- Hits 19150 18938 -212 \n- Misses 5458 5670 +212 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=footer). Last update [d16e36c...20f340c](https://codecov.io/gh/huggingface/transformers/pull/5448?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | - fixed grammar and spelling
- added an intro
- updated Training data references | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5448/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5448",
"html_url": "https://github.com/huggingface/transformers/pull/5448",
"diff_url": "https://github.com/huggingface/transformers/pull/5448.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5448.patch",
"merged_at": 1593779157000
} |
https://api.github.com/repos/huggingface/transformers/issues/5447 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5447/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5447/comments | https://api.github.com/repos/huggingface/transformers/issues/5447/events | https://github.com/huggingface/transformers/issues/5447 | 649,243,564 | MDU6SXNzdWU2NDkyNDM1NjQ= | 5,447 | Where did "prepare_for_model" go? What is the replacement? | {
"login": "ohmeow",
"id": 14000,
"node_id": "MDQ6VXNlcjE0MDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/14000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohmeow",
"html_url": "https://github.com/ohmeow",
"followers_url": "https://api.github.com/users/ohmeow/followers",
"following_url": "https://api.github.com/users/ohmeow/following{/other_user}",
"gists_url": "https://api.github.com/users/ohmeow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohmeow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohmeow/subscriptions",
"organizations_url": "https://api.github.com/users/ohmeow/orgs",
"repos_url": "https://api.github.com/users/ohmeow/repos",
"events_url": "https://api.github.com/users/ohmeow/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohmeow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Why were you using `tokenize` + `prepare_for_model` instead of `encode`/`encode_plus` ? Let's see how to fit your use-case with the best approach!",
"Sure.\r\n\r\nI'm the developer of [this library](https://ohmeow.github.io/blurr/) which integrates huggingface with fastai. Probably the best thing is to look at the code [here](https://github.com/ohmeow/blurr/blob/master/blurr/data/core.py) to see what I'm doing.\r\n\r\nThe fastai bits don't work all that well with inputs composed of multiple tensors, and so my initial fastai transform converts the text to ids, which are then wrapped in a tensor to make fastai happy. Before batches are created, I would use `prepare_for_model` to get the necessary transformer inputs (e.g. input_ids, attention_mask, etc...), pad to max length, etc..., using those ids.\r\n\r\n@sgugger may have some thoughts on how to adapt my code better to v3 given he wrote most of those pieces in fastai :)\r\n\r\nThanks!",
"@LysandreJik Would it be possible that you link this issue in the \"breaking changes\" section for the 3.0.0 release ๐ค \r\n\r\nIn Flair we had the same issue :)",
"Hmm, I see, let me investigate. Does using the private `_prepare_for_model` solve your issue? I'll ping @n1t0 as well as he might have more info on that front.\r\n\r\n@stefan-it, just did! Thank you.",
"Ok, indeed I'll add it as a breaking change also we could expose it publicly again in a 3.0.1 if it happens that many people were using it.\r\n\r\nThe main reason I made it private is that we don't have it in fast tokenizers (though we could maybe work on having it) and I'm trying to have both APIs come closer to each others.\r\n\r\n@n1t0 do you think we could provide an implementation of this method in Fast tokenizers? It's basically all the post-processing (truncation + merging pairs + padding) after the conversion in integer indices.",
"We should be able to expose the post-processing for both `List[str]` and `List[int]` in `tokenizers`, but I'll have to check. I think the only problem in doing so is that all the mappings (chars <=> tokens <=> words) and offsets won't make any sense in this case.",
"Ok, we will release a patch to fix this breaking change (re-expose `prepare_for_models` for both slow and fast tokenizers with backward-compatible API) plus the one mentioned in #5377 probably tomorrow or early next week."
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | I'm working with already numericalized data (e.g., where the text has been converted to ids via `tokenizer.tokenize()` + `convert_tokens_to_ids()`) and was using `prepare_for_model` to build the appropriate input dictionary ... ***but*** that method is gone in 3.0.
So ... what should I use/do now?
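For reference, a rough sketch of what I am trying to do, plus my best guess at a v3 stopgap (assuming the input is already a list of token ids; `encode_plus` accepting `List[int]` is something I inferred from the docstrings, so please correct me if that is wrong):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
token_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))

# v2.x, now gone in 3.0:
# inputs = tokenizer.prepare_for_model(token_ids, max_length=128, pad_to_max_length=True)

# possible v3 stopgap: encode_plus also accepts a list of ids directly
inputs = tokenizer.encode_plus(
    token_ids, max_length=128, padding="max_length", truncation=True
)
```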
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5447/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5446 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5446/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5446/comments | https://api.github.com/repos/huggingface/transformers/issues/5446/events | https://github.com/huggingface/transformers/issues/5446 | 649,212,031 | MDU6SXNzdWU2NDkyMTIwMzE= | 5,446 | Reformer language modeling using run_language_modeling.py: sentences didn't pad to max_length | {
"login": "qwu01",
"id": 45884870,
"node_id": "MDQ6VXNlcjQ1ODg0ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/45884870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qwu01",
"html_url": "https://github.com/qwu01",
"followers_url": "https://api.github.com/users/qwu01/followers",
"following_url": "https://api.github.com/users/qwu01/following{/other_user}",
"gists_url": "https://api.github.com/users/qwu01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qwu01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qwu01/subscriptions",
"organizations_url": "https://api.github.com/users/qwu01/orgs",
"repos_url": "https://api.github.com/users/qwu01/repos",
"events_url": "https://api.github.com/users/qwu01/events{/privacy}",
"received_events_url": "https://api.github.com/users/qwu01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Can you maybe just use the script provided here: https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb? In Reformer you have to be careful with the length you use for training. The docs can be helpful here as well: https://huggingface.co/transformers/model_doc/reformer.html",
"Thank you, this notebook is helpful. This script uses Crime and Punish as one document and padded to 2**19.\r\nSo basically if I want to train a Reformer language model on a line-by-line text dataset (e.g. wikitext2), I'll need to write code to manually pad sequences instead of using run-language-modeling.py? ",
"You have to make sure that your reformer config is correctly set up (especially the axial position encodings) according to the docs and the length of your data"
] | 1,593 | 1,593 | 1,593 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Reformer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
- using run_language_modeling.py
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
- language modeling using wikitext2
## To reproduce
Steps to reproduce the behavior:
1. Using wikitext2
2. run
```bash
python run_language_modeling.py \
--output_dir=output \
--model_type=reformer \
--config_name=google/reformer-crime-and-punishment \
--tokenizer_name=google/reformer-crime-and-punishment \
--line_by_line \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE
```
## Expected behavior
Training should start, but instead it raises:
ValueError: If training, sequence Length 444 has to be a multiple of least common multiple chunk_length 64. Please consider padding the input to a length of 448.
It seems the tokenizer didn't pad to `max_length` in `LineByLineTextDataset` (https://github.com/huggingface/transformers/blob/f4323dbf8c29952b1ae55b979120969a9aeb730e/src/transformers/data/datasets/language_modeling.py#L78)?
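For reference, a rough sketch of the manual padding I would expect to need as a workaround (the multiple of 64 comes from the error message; the pad token id of 0 is an assumption I have not checked against this tokenizer):

```python
import torch

def pad_to_multiple(input_ids, multiple=64, pad_token_id=0):
    # pad a (..., seq_len) tensor of ids up to the next multiple of `multiple`
    pad_len = -input_ids.size(-1) % multiple
    if pad_len == 0:
        return input_ids
    padding = input_ids.new_full(input_ids.shape[:-1] + (pad_len,), pad_token_id)
    return torch.cat([input_ids, padding], dim=-1)
```

For the 444-token example in the traceback this pads to 448, which is what the error asks for.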
## Environment info
- `transformers` version: 3.0.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5446/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5445 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5445/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5445/comments | https://api.github.com/repos/huggingface/transformers/issues/5445/events | https://github.com/huggingface/transformers/issues/5445 | 649,185,662 | MDU6SXNzdWU2NDkxODU2NjI= | 5,445 | "Write With Transformer" inserts a space whenever accepting a suggestion, even if a space doesn't belong there | {
"login": "flarn2006",
"id": 687313,
"node_id": "MDQ6VXNlcjY4NzMxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/687313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flarn2006",
"html_url": "https://github.com/flarn2006",
"followers_url": "https://api.github.com/users/flarn2006/followers",
"following_url": "https://api.github.com/users/flarn2006/following{/other_user}",
"gists_url": "https://api.github.com/users/flarn2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flarn2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flarn2006/subscriptions",
"organizations_url": "https://api.github.com/users/flarn2006/orgs",
"repos_url": "https://api.github.com/users/flarn2006/repos",
"events_url": "https://api.github.com/users/flarn2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/flarn2006/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue still occurs. How do I reopen?"
] | 1,593 | 1,615 | 1,599 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT-2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X] the official example scripts: Write With Transformer
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [X] an official GLUE/SQUaD task: Write With Transformer
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Type something ending in the middle of a word. (e.g. `See how a modern neural netw`) Alternatively, type something ending with an open parenthesis or quotation mark, such as `Donald Trump tweeted "`.
2. Press Tab and accept a suggestion.
3. Observe how a space is added prior to the accepted text, despite a space not belonging there.
## Expected behavior
I'm not sure about the other models, but I know GPT-2 does start continuations with a space when appropriate, so it should be possible to add a space only when one is desirable. Failing that, I think it would be better to not add a space at all: it's easier to manually type a space before pressing Tab (if one is desired) than to go back and delete unwanted spaces every time they're generated.
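For what it's worth, the rule I have in mind is trivial to state in code (purely illustrative; the actual fix presumably belongs in the app's front-end):

```python
def join_suggestion(prefix: str, suggestion: str) -> str:
    # GPT-2's byte-level tokens already carry their own leading space when the
    # continuation starts a new word, so the safest default is to never inject
    # an extra space and trust the model's output verbatim.
    return prefix + suggestion
```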
## Environment info
N/A | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5445/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5444 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5444/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5444/comments | https://api.github.com/repos/huggingface/transformers/issues/5444/events | https://github.com/huggingface/transformers/issues/5444 | 649,164,751 | MDU6SXNzdWU2NDkxNjQ3NTE= | 5,444 | Inconsistent tokenizer handling of max_len | {
"login": "johncookds",
"id": 16158793,
"node_id": "MDQ6VXNlcjE2MTU4Nzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/16158793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johncookds",
"html_url": "https://github.com/johncookds",
"followers_url": "https://api.github.com/users/johncookds/followers",
"following_url": "https://api.github.com/users/johncookds/following{/other_user}",
"gists_url": "https://api.github.com/users/johncookds/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johncookds/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johncookds/subscriptions",
"organizations_url": "https://api.github.com/users/johncookds/orgs",
"repos_url": "https://api.github.com/users/johncookds/repos",
"events_url": "https://api.github.com/users/johncookds/events{/privacy}",
"received_events_url": "https://api.github.com/users/johncookds/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, similarly to the `good_tokenizer` where you enabled truncation, you should enable it for the `bad_tokenizer`:\r\n\r\n```py\r\nfrom transformers import RobertaTokenizerFast\r\n\r\nactually_very_good_tokenizer = RobertaTokenizerFast.from_pretrained(\"./BERT-Esperanto/\", max_len=512)\r\n\r\ntxt = \"Mi estas Julien.\" * 1000\r\n\r\nprint(\r\n len(actually_very_good_tokenizer.encode(txt, truncation=True))\r\n)\r\n\r\n# 512\r\n```\r\n\r\nYou can check the documentation [here](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__).",
"Ah, gotcha my mistake, thanks for the quick response",
"Love the variable names here. (and the sample text) ๐คฃ"
] | 1,593 | 1,593 | 1,593 | NONE | null | Hi, it seems that at least `RobertaTokenizerFast` is not actually truncating encodings to `max_len` when encoding (the same issue occurs with the other encoding functions). The BPE tokenizer from the `tokenizers` library does truncate.
Below the problem is shown based on the 'how to train from scratch' example.
```
from tokenizers.implementations import ByteLevelBPETokenizer
from tokenizers.processors import BertProcessing
from transformers import RobertaTokenizerFast
good_tokenizer = ByteLevelBPETokenizer(
"./BERT-Esperanto/vocab.json",
"./BERT-Esperanto/merges.txt",
)
good_tokenizer._tokenizer.post_processor = BertProcessing(
("</s>", good_tokenizer.token_to_id("</s>")),
("<s>", good_tokenizer.token_to_id("<s>")),
)
good_tokenizer.enable_truncation(max_length=512)
bad_tokenizer = RobertaTokenizerFast.from_pretrained("./BERT-Esperanto/", max_len=512)
txt = "Mi estas Julien." * 1000
print(
len(good_tokenizer.encode(txt).tokens),
len(bad_tokenizer.encode(txt))
)
# results:
# 512
# 5002
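# Note: explicitly enabling truncation appears to fix this, as pointed out in
# the replies: len(bad_tokenizer.encode(txt, truncation=True)) -> 512 as well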
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5444/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5443 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5443/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5443/comments | https://api.github.com/repos/huggingface/transformers/issues/5443/events | https://github.com/huggingface/transformers/issues/5443 | 649,095,666 | MDU6SXNzdWU2NDkwOTU2NjY= | 5,443 | (TF) model.generate to tf.function for tf serving | {
"login": "gyin94",
"id": 67664443,
"node_id": "MDQ6VXNlcjY3NjY0NDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/67664443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gyin94",
"html_url": "https://github.com/gyin94",
"followers_url": "https://api.github.com/users/gyin94/followers",
"following_url": "https://api.github.com/users/gyin94/following{/other_user}",
"gists_url": "https://api.github.com/users/gyin94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gyin94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gyin94/subscriptions",
"organizations_url": "https://api.github.com/users/gyin94/orgs",
"repos_url": "https://api.github.com/users/gyin94/repos",
"events_url": "https://api.github.com/users/gyin94/events{/privacy}",
"received_events_url": "https://api.github.com/users/gyin94/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @gyin-ai, \r\n\r\nThanks a lot for the issue! Currently `generate` does not seem to be compatible with `tf.function`. I will open an issue about this and hopefully fix generate so that it will become possible to generate using `tf.function`. You're use case should definitely be possible in the future!\r\n\r\nI assume a lot of operations will have to changed in the tf generate function though, so this PR might take a while.",
"Will try to start on this next week: #5662",
"@patrickvonplaten perfect! Look forwarding to this feature so that we could use the LM model with various decoding solutions directly in the TF Serving or on-device. ",
"Yeah, it not going to that easy :D \r\nI will be on holiday for two weeks now, but we will be starting to put the focus much more on TF soon!\r\n\r\nAlso pinging @jplu here, just for notification.",
"Hey @gyin-ai \r\n\r\nI have done some work here https://github.com/huggingface/transformers/pull/5468 for making the models, saved model compliants. Can you try it to see if it might solve your issue?\r\n\r\n@patrickvonplaten has also started to do some great work on the LM part of TF, so yes @gyin-ai you can expect to have better TF compliancy soon ;)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> Will try to start on this next week: #5662\r\n\r\n@patrickvonplaten Has the problem been solved",
"@patrickvonplaten Any update on this? Seems `model.generate` is still not compatible with `tf.funtion`.",
"cc @Rocketknight1 ",
"Hi @yuwon, I'm (one of!) the current TF maintainers. We've experimented with wrapping all of `generate()` in a tf.function, but we generally find that buffers are not freed properly after each token is generated and OOM errors usually result after a few steps. `generate()` is important and so we're planning a complete investigation of this to see if there's any way we could make it work, but it's a sizeable project with a lot of other competing priorities and we don't have a concrete ETA right now.",
"@Rocketknight1 can you share you you've done on wrapping generate inside a tf function? It might be a start point for us to submit a PR and try to solve it. "
] | 1,593 | 1,650 | 1,600 | NONE | null | # ❓ Questions & Help
How can we wrap `model.generate` and export it as part of a SavedModel `.pb` file? That way we could use beam search or top-k decoding under TF Serving, or when converting the model with coremltools.
## Details
I am trying to find a way to wrap the model in a Keras `Model`, but apparently `model.generate` is not supported under `tf.function`; for instance, its Python-level `for` loops are not supported in `tf.function`.
```
import tensorflow as tf
from transformers import TFGPT2LMHeadModel
class WrapModel(tf.keras.models.Model):
def __init__(self, transformer):
super(WrapModel, self).__init__()
self.transformer = transformer
@tf.function
def _internal_generate(self, inputs):
return self.transformer.generate(inputs, max_length=10, length_penalty=1.0, repetition_penalty=2.5,
early_stopping=True, num_beams=3)
def call(self, inputs, **kwargs):
print(inputs.shape)
res = self._internal_generate(inputs)
return res
gpt2_model = TFGPT2LMHeadModel.from_pretrained('distilgpt2')
w = WrapModel(gpt2_model)
input_layer = tf.keras.layers.Input(shape=(10,), dtype=tf.int32, name='input_ids')
prediction_model = w(input_layer)
tf_model = tf.keras.models.Model(inputs=input_layer, outputs=prediction_model)
import coremltools as ct
mlmodel = ct.convert(tf_model)
```
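For completeness, a minimal sketch of the direction I have been exploring as a workaround: a greedy decoder built on `tf.while_loop`, so the loop itself can live inside a `tf.function`. This is only an assumption on my side (fixed `max_length`, no beam search, no past-key caching), not the library's `generate`:

```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")

@tf.function
def greedy_generate(input_ids, max_length=20):
    # input_ids: int32 tensor of shape (batch, seq_len)
    def keep_going(ids):
        return tf.shape(ids)[1] < max_length

    def step(ids):
        logits = model(ids)[0]  # (batch, seq_len, vocab)
        next_id = tf.argmax(logits[:, -1, :], axis=-1, output_type=tf.int32)
        return (tf.concat([ids, next_id[:, None]], axis=1),)

    (ids,) = tf.while_loop(
        keep_going,
        step,
        loop_vars=(input_ids,),
        shape_invariants=(tf.TensorShape([None, None]),),
    )
    return ids
```

If something along these lines is viable, beam search would presumably need the same treatment, with the beam state threaded through the loop variables.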
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5443/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5442 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5442/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5442/comments | https://api.github.com/repos/huggingface/transformers/issues/5442/events | https://github.com/huggingface/transformers/pull/5442 | 649,073,784 | MDExOlB1bGxSZXF1ZXN0NDQyNzc1OTEx | 5,442 | [fix] Marian tests import | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=h1) Report\n> Merging [#5442](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/13deb95a405bbd1037ad233c692d7fd1de9d31e3&el=desc) will **increase** coverage by `1.60%`.\n> The diff coverage is `66.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5442 +/- ##\n==========================================\n+ Coverage 76.22% 77.82% +1.60% \n==========================================\n Files 141 141 \n Lines 24420 24421 +1 \n==========================================\n+ Hits 18614 19006 +392 \n+ Misses 5806 5415 -391 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <66.66%> (+0.05%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `73.37% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+0.82%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `89.11% <0.00%> (+1.02%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.18% <0.00%> (+5.02%)` | :arrow_up: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.18% <0.00%> (+9.85%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.14% <0.00%> (+29.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5442/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=footer). Last update [43cb03a...3f31917](https://codecov.io/gh/huggingface/transformers/pull/5442?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5442/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5442",
"html_url": "https://github.com/huggingface/transformers/pull/5442",
"diff_url": "https://github.com/huggingface/transformers/pull/5442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5442.patch",
"merged_at": 1593618143000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5441 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5441/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5441/comments | https://api.github.com/repos/huggingface/transformers/issues/5441/events | https://github.com/huggingface/transformers/issues/5441 | 649,064,445 | MDU6SXNzdWU2NDkwNjQ0NDU= | 5,441 | Benchmarking on TPU shows clearly wrong results | {
"login": "sslotin",
"id": 1344788,
"node_id": "MDQ6VXNlcjEzNDQ3ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1344788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sslotin",
"html_url": "https://github.com/sslotin",
"followers_url": "https://api.github.com/users/sslotin/followers",
"following_url": "https://api.github.com/users/sslotin/following{/other_user}",
"gists_url": "https://api.github.com/users/sslotin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sslotin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sslotin/subscriptions",
"organizations_url": "https://api.github.com/users/sslotin/orgs",
"repos_url": "https://api.github.com/users/sslotin/repos",
"events_url": "https://api.github.com/users/sslotin/events{/privacy}",
"received_events_url": "https://api.github.com/users/sslotin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"That looks like some solid batch parallelization :D Yeah these results don't look very accurate.\r\n\r\nTo be honest TPU Benchmarking is not very well tested yet and probably not very reliable, also partly because PyTorch/XLA is not very robust yet either. I will try to see if I can find the reason for this.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,600 | 1,600 | NONE | null | # ๐ Bug
## Information
I'm trying to benchmark performance of TPUs and the results don't make sense: they are the same for all batch sizes.
It was mentioned [in the pull request that added the feature](https://github.com/huggingface/transformers/pull/4850#issuecomment-640751636) but the PR was merged anyway.
## To reproduce
```
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments
args = PyTorchBenchmarkArguments(
models=["bert-large-uncased"],
batch_sizes=[i * 1024 for i in range(2, 17)],
sequence_lengths=[16],
training=True,
no_memory=True
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()
```
Output:
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-large-uncased 2048 16 0.027
bert-large-uncased 3072 16 0.029
bert-large-uncased 4096 16 0.028
bert-large-uncased 5120 16 0.027
bert-large-uncased 6144 16 0.027
bert-large-uncased 7168 16 0.028
bert-large-uncased 8192 16 0.028
bert-large-uncased 9216 16 0.027
bert-large-uncased 10240 16 0.027
bert-large-uncased 11264 16 0.027
bert-large-uncased 12288 16 0.027
bert-large-uncased 13312 16 0.027
bert-large-uncased 14336 16 0.027
bert-large-uncased 15360 16 0.028
bert-large-uncased 16384 16 0.028
--------------------------------------------------------------------------------
TPU was used for inference. Note that the time after compilation stabilized (after ~10 inferences model.forward(..) calls) was measured.
==================== TRAIN - SPEED - RESULTS ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-large-uncased 2048 16 0.089
bert-large-uncased 3072 16 0.074
bert-large-uncased 4096 16 0.091
bert-large-uncased 5120 16 0.091
bert-large-uncased 6144 16 0.091
bert-large-uncased 7168 16 0.075
bert-large-uncased 8192 16 0.089
bert-large-uncased 9216 16 0.09
bert-large-uncased 10240 16 0.074
bert-large-uncased 11264 16 0.09
bert-large-uncased 12288 16 0.09
bert-large-uncased 13312 16 0.09
bert-large-uncased 14336 16 0.077
bert-large-uncased 15360 16 0.089
bert-large-uncased 16384 16 0.091
--------------------------------------------------------------------------------
TPU was used for training. Note that the time after compilation stabilized (after ~10 train loss=model.forward(...) + loss.backward() calls) was measured.
```
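For what it's worth, the flat numbers look to me like a synchronization artifact: XLA executes lazily, so a timer that never forces each step to materialize may mostly measure graph construction. A rough illustration of the explicit sync I would expect the measurement to need (this uses the `torch_xla` API directly and is my assumption, not the benchmark's actual code):

```python
import time
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
x = torch.randn(2048, 16, device=device)

start = time.perf_counter()
y = (x @ x.t()).sum()
xm.mark_step()  # flush the lazily recorded graph to the TPU
_ = y.item()    # block until the result has actually been computed
print(time.perf_counter() - start)
```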
## Environment info
Running on GKE cluster, TPUv3-8, vanilla tpu-pytorch/xla:r1.5 image, XRT_TPU_CONFIG set
```
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 3.0.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.5.0a0+6d48871
- python_version: 3.6.10
- system: Linux
- cpu:
- architecture: 64bit
- date: 2020-07-01
- time: 14:37:15.608184
- fp16: False
- use_multiprocessing: False
- only_pretrain_model: False
- cpu_ram_mb: 30156
- use_gpu: False
- use_tpu: True
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5441/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5440 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5440/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5440/comments | https://api.github.com/repos/huggingface/transformers/issues/5440/events | https://github.com/huggingface/transformers/pull/5440 | 649,053,663 | MDExOlB1bGxSZXF1ZXN0NDQyNzU5MjQ0 | 5,440 | Fix dropdown bug in searches | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | COLLABORATOR | null | Version in the dropdown was getting weird values during searches; this PR fixes it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5440/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5440",
"html_url": "https://github.com/huggingface/transformers/pull/5440",
"diff_url": "https://github.com/huggingface/transformers/pull/5440.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5440.patch",
"merged_at": 1593615780000
} |
https://api.github.com/repos/huggingface/transformers/issues/5439 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5439/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5439/comments | https://api.github.com/repos/huggingface/transformers/issues/5439/events | https://github.com/huggingface/transformers/pull/5439 | 649,052,138 | MDExOlB1bGxSZXF1ZXN0NDQyNzU4MDAy | 5,439 | Don't discard entity_group when token is the last in the sequence. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"LGTM!\r\n\r\nThanks @mfuntowicz !\r\n\r\nBefore:\r\n\r\n```bash\r\nIn [6]: nlp(\"My name is Wolfgang and I live in Berlin\") \r\nOut[6]: [{'entity_group': 'I-PER', 'score': 0.9991481900215149, 'word': 'Wolfgang'}]\r\n````\r\n\r\nWith this PR:\r\n\r\n```bash\r\nIn [5]: nlp(\"My name is Wolfgang and I live in Berlin\") \r\nOut[5]: \r\n[{'entity_group': 'I-PER', 'score': 0.9991481900215149, 'word': 'Wolfgang'},\r\n {'entity_group': 'I-LOC', 'score': 0.9983668327331543, 'word': 'Berlin'}]\r\n```",
"@LysandreJik CI error seems unrelated, is it ok for you if I merge?",
"Did you check this, @enzoampil? Just making sure to ping you as you contributed #3957 ๐ค",
"@julien-c Did a few checks as well and looks great! Was planning to include this in this PR #4987 (2nd point), but this seems to solve it cleanly already, so will consider this fix for that PR :smile:\r\n\r\nUPDATE: Ended up modifying this fix in the PR above, due to cases where the last token was repeating (for the test cases set in the above PR)."
] | 1,593 | 1,593 | 1,593 | MEMBER | null | Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5439/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5439",
"html_url": "https://github.com/huggingface/transformers/pull/5439",
"diff_url": "https://github.com/huggingface/transformers/pull/5439.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5439.patch",
"merged_at": 1593628243000
} |
https://api.github.com/repos/huggingface/transformers/issues/5438 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5438/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5438/comments | https://api.github.com/repos/huggingface/transformers/issues/5438/events | https://github.com/huggingface/transformers/pull/5438 | 649,009,287 | MDExOlB1bGxSZXF1ZXN0NDQyNzIyMzQ0 | 5,438 | Change model outputs types to self-document outputs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The old occurrences of `isinstance(item, tuple)` can be replaced by `isinstance(item, tuple) or is_dataclass(item)` (to catch the return_tuple behavior) (`is_dataclass` comes from the dataclasses module).",
"General question (haven't dived deeply into this PR): \r\n\r\ndo we really want to maintain backward compatibility on this at \"all cost\"? Or should we migrate to a cleaner \"real\" NamedTuple or Dataclass output w/ a major version change?",
"> General question (haven't dived deeply into this PR):\r\n> \r\n> do we really want to maintain backward compatibility on this at \"all cost\"? Or should we migrate to a cleaner \"real\" NamedTuple or Dataclass output w/ a major version change?\r\n\r\nFWIW, one of the more frequent complaints I saw in the survey we just sent out is that we introduce breaking changes too often.",
"I think we want to maintain backwards compatibility with this at all cost, since not doing this would introduce a huge breaking change that will affect all users. And backwards compatibility doesn't seem too hard to keep, with @sgugger's approach.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=h1) Report\n> Merging [#5438](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b2747af5434e5a5d8ab1d7e2789699d20d7a4ab8&el=desc) will **decrease** coverage by `0.12%`.\n> The diff coverage is `95.51%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5438 +/- ##\n==========================================\n- Coverage 77.94% 77.81% -0.13% \n==========================================\n Files 145 146 +1 \n Lines 25368 25939 +571 \n==========================================\n+ Hits 19773 20185 +412 \n- Misses 5595 5754 +159 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <0.00%> (-0.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <รธ> (รธ)` | |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <รธ> (รธ)` | |\n| [src/transformers/modeling\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <รธ> (รธ)` | |\n| [src/transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `24.10% <45.45%> (+1.99%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.71% <75.51%> (-1.95%)` | :arrow_down: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `87.70% <76.66%> (-0.61%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.90% <82.35%> (-1.25%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.65% <83.33%> (-0.80%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.16% <91.42%> (+0.14%)` | :arrow_up: |\n| ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/5438/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=footer). Last update [b2747af...6b5f49b](https://codecov.io/gh/huggingface/transformers/pull/5438?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is great! Very clean",
"Added a few TFBert models so tagging @jplu.\r\n\r\nThere seems to be issues coming with the compilation of models and I had to add a hacky shape property to `ModelOutput` to make some tests pass (one still mysteriously fails for electra). In general is changing the output type a bad idea for TF models or is it worth pursuing this?",
"This is a great work!!! Unfortunately changing the output type is a bad idea for TF models as you said :( in TF each output must be a tensor or a dict of tensors. Mostly for saved models, as simple example you can run this small script:\r\n\r\n```\r\nimport tensorflow as tf\r\nfrom transformers import TFBertModel, BertTokenizer, BertConfig\r\nmodel = TFBertModel.from_pretrained('bert-base-multilingual-uncased')\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-multilingual-uncased\")\r\nfeatures = tokenizer.encode_plus(\"Hello world.\", add_special_tokens=True, return_tensors=\"tf\")\r\nmodel._saved_model_inputs_spec = None\r\nmodel._set_save_spec(dict(features))\r\ntf.saved_model.save(model, \"save/test\")\r\n```\r\n\r\nYou will get:\r\n\r\n```\r\nTypeError: To be compatible with tf.contrib.eager.defun, Python functions must return zero or more Tensors; in compilation of <function trace_model_call.<locals>._wrapped_model at 0x7efd20582b90>, found return value of type <class 'transformers.modeling_tf_outputs.TFEncoderOutputWithPooling'>, which is not a Tensor.\r\n```\r\n\r\nI propose that you remove the TF parts and we will take the time later to check that together? Sorry :(",
"To fix the failing test pass, you can add the following code:\r\n\r\n```python \r\nif \"__name__\" not in frame.f_globals:\r\n return traceit\r\n```\r\n\r\nbefore this line: https://github.com/huggingface/transformers/blob/fa5423b1695cd24856bcff47214172e0f540d924/src/transformers/benchmark/benchmark_utils.py#L389\r\n\r\nI checked and the functionality is not broken because of it. It just means that for lines in which the code cannot find nested modules to trace it jumps out of the recursion directly, similar to what is coded here: https://github.com/huggingface/transformers/blob/fa5423b1695cd24856bcff47214172e0f540d924/src/transformers/benchmark/benchmark_utils.py#L391\r\n\r\nSo IMO, this is actually how the code should be written in benchmark tracing and not a dirty fix. Also cc @thomwolf here since he originally added the code.",
"FYI, I've listed followups that need to happen in [this project](https://github.com/huggingface/transformers/projects/20) (will tackle them but since I'm going off next week, want to be sure I don't forget anything ;-) ).",
"Very excited about this!"
] | 1,593 | 1,594 | 1,594 | COLLABORATOR | null | This PR addresses #5226 with no breaking changes. Instead of returning tuples, all PyTorch models now return a subclass of `ModelOutput` that is appropriate. Here is an example on a base model:
```
from transformers import BertTokenizer, BertForSequenceClassification
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(**inputs, labels=labels)
```
Then `outputs` will be a `SequenceClassifierOutput` object, which has the returned elements as attributes. The previous syntax
```
loss, logits = outputs[:2]
```
will still work, but you can also do
```
loss = outputs.loss
logits = outputs.logits
```
or also
```
loss = outputs["loss"]
logits = outputs["logits"]
```
Under the hood, `outputs` is a dataclass with optional fields that may be set to `None` (like `attentions` in our example). If you index by integer or by slice, the None fields are skipped (for backward-compatibility). If you try to access an attribute that's set to None by its key (for instance here `outputs["attentions"]`), it will raise an error.
You can convert `outputs` to a regular tuple/dict with `outputs.to_tuple()` or `outputs.to_dict()`.
You can revert to the old behavior of getting tuples by setting `return_tuple=True` in the config you pass to your model, when you instantiate your model, or when you call your model on some inputs. If you're using `torchscript` (and the config you passed to your model has `config.torchscript = True`) this will automatically be the case (because jit only handles tuples as outputs).
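As a quick sketch of both escape hatches (continuing the example above, with the names used in this description):
```
# Convert the output back to a plain tuple when downstream code expects one.
loss, logits = outputs.to_tuple()[:2]

# Or opt out of the new output type at call time.
outputs = model(**inputs, labels=labels, return_tuple=True)
loss, logits = outputs[:2]
```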
A few other comments about the PR:
- The return part of the documentation of each model is now generated from the model output. It's done via the decorator `@add_code_sample_docstrings`, or via `@replace_return_docstrings` when the example is inside the docstring. In the second case, we need to know where to put the return documentation, so there is an empty "Return:" that is used as a placeholder.
- Two models were not tested (and had a bug): `XLMForTokenClassification` and `XLNetForQuestionAnsweringSimple`. This PR fixes that.
- The docstrings of seq2seq generative models like Bart or T5 were wrong as far as the return was concerned. This PR naturally fixes that.
- The argument `output_hidden_states` was omitted in all models' forward methods; this PR adds it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5438/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5438",
"html_url": "https://github.com/huggingface/transformers/pull/5438",
"diff_url": "https://github.com/huggingface/transformers/pull/5438.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5438.patch",
"merged_at": 1594395414000
} |
https://api.github.com/repos/huggingface/transformers/issues/5437 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5437/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5437/comments | https://api.github.com/repos/huggingface/transformers/issues/5437/events | https://github.com/huggingface/transformers/issues/5437 | 648,936,194 | MDU6SXNzdWU2NDg5MzYxOTQ= | 5,437 | "Write With Transformer" not generating text (502 Bad Gateway) | {
"login": "flarn2006",
"id": 687313,
"node_id": "MDQ6VXNlcjY4NzMxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/687313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flarn2006",
"html_url": "https://github.com/flarn2006",
"followers_url": "https://api.github.com/users/flarn2006/followers",
"following_url": "https://api.github.com/users/flarn2006/following{/other_user}",
"gists_url": "https://api.github.com/users/flarn2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flarn2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flarn2006/subscriptions",
"organizations_url": "https://api.github.com/users/flarn2006/orgs",
"repos_url": "https://api.github.com/users/flarn2006/repos",
"events_url": "https://api.github.com/users/flarn2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/flarn2006/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik is rebooting the Raspberry Pi right now",
"Joking, I mean the Hugging Face data center",
"It's back up!",
"Thanks! It does work now, but it seems slower to respond and sometimes it times out. This is to the point where it's close to unusable. Do you happen to know whether this is on my end or yours?",
"Yes, there seems to be an issue. I'm fixing restarting the server.",
"So I guess that would be why I just started getting 502 Bad Gateway again? :)\r\nThanks for the help btw.",
"Everything should be back to normal now. Thanks for letting us know!",
"Yep, I was using it earlier and it appears so. Glad I could help!"
] | 1,593 | 1,593 | 1,593 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): (Distil-)GPT2 on WriteWithTransformer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X] the official example scripts: Write With Transformer
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: Not sure what the task is called but it's WWT
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Attempt to use the autocomplete on Write With Transformer
2. Notice it appears to be loading forever
3. Open the browser console, go to the Network tab, and try again
4. Observe the "502 Bad Gateway" error
```
HTTP/1.1 502 Bad Gateway
Server: nginx/1.14.2
Date: Wed, 01 Jul 2020 12:14:42 GMT
Content-Type: text/html
Content-Length: 173
Connection: keep-alive
X-JeanClaude: True
Access-Control-Allow-Headers: Content-Type
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>
```
## Expected behavior
The autocomplete should appear as normal.
## Environment info
You'd know that better than I do.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5437/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5436 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5436/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5436/comments | https://api.github.com/repos/huggingface/transformers/issues/5436/events | https://github.com/huggingface/transformers/issues/5436 | 648,932,662 | MDU6SXNzdWU2NDg5MzI2NjI= | 5,436 | Squad2 processor error | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"You have to add a \"[CLS]\" token to Reformer tokenizer here to make the script work. The one tokenizer that is online for reformer: `tok = ReformerTokenizer.from_pretrained(\"google/reformer-crime-and-punishment\")` does not have a CLS token. If you add a new token via:\r\n\r\n`tok.add_special_tokens` \r\n\r\nthen you will also have to add a new weight to the word embedding of the models. \r\nAlternatively, you could also just set the cls token to some other token that exists\r\n\r\n```python\r\ntok.cls_token = tok.eos_token\r\n```",
"But overall - not sure at all whether fine-tuning the few pretrained reformer models that we have will work well for QA.",
"Thanks a lot, I forgot to check the tokens.\r\nI will train an QA model for testing purposes only.\r\nIf it's working correctly within my application I will train an MLM model on pg-19 dataset",
"The cls_eos token does not exist too\r\nNow the error message is \"ValueError: 2 is not in list\"\r\nDo you know which token exists ?",
"not really sure what you mean by `cls_eos token`. \r\nIf you use this tokenizer: `tok = ReformerTokenizer.from_pretrained(\"google/reformer-crime-and-punishment\")`, \r\na simple hack to make the tokenizer work is for example to set its <PAD> token as its <CLS> token:\r\n```python\r\ntok = ReformerTokenizer.from_pretrained(\"google/reformer-crime-and-punishment\")\r\ntok.cls_token = tok.pad_token\r\n\r\n# => now use this tokenizer in your script.\r\n```",
"Ups, sorry\r\nI meant\r\n<code>tok.cls_token = tok.eos_token</code>\r\nwas just in different things with my brain thinking and my hands typing \r\n\r\nEdit:\r\nIt`s working now, let`s see which results we get",
"Are you fine-tuning the reformer-crime-and-punish model? Would be very surprised if this gives good results :D But very keen for updates :-) ",
"At the moment I am playing with the hyperparameters.\r\nOf course, I will share my results with you.\r\nBut before I need to get the trio of my dataset, the nlp library and the trainingsscript work :D",
"I'm trying fine-tuning Reformer on Squad2 dataset from pre-trained model \"google/crime-and-punishment\". \r\nUsing tok.cls_token = tok.pad_token, I have the following error: \r\n\r\nSo I add tok.pad_token = tok.eos_token, but I have a new error: 2 is not in list.\r\ncan someone help me? Thank you\r\n",
"@FrancescoTroiano \r\nHi, have you fixed the issue?\r\nI have a same problem here \r\nI add \r\n```\r\ntokenizer.cls_token = tokenizer.pad_token\r\n```\r\nbut got ValueError: 50257 is not in list"
] | 1,593 | 1,667 | 1,593 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
question answering
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
squadv2
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Install the branch by @patrickvonplaten that adds Reformer for QA (#5433)
2. Run the example script with the SQuAD v2 option enabled, using the SQuAD v2 dataset downloaded from the official website
```
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/data/processors/squad.py", line 199, in squad_convert_example_to_features
cls_index = span["input_ids"].index(tokenizer.cls_token_id)
ValueError: None is not in list
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "run_squad.py", line 821, in <module>
main()
File "run_squad.py", line 763, in main
train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)
File "run_squad.py", line 449, in load_and_cache_examples
features, dataset = squad_convert_examples_to_features(
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/data/processors/squad.py", line 330, in squad_convert_examples_to_features
features = list(
File "/home/a-ware/.local/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py", line 420, in <genexpr>
return (item for chunk in result for item in chunk)
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py", line 868, in next
raise value
ValueError: None is not in list
```
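The root cause (worked out in the comments below) is that the pretrained Reformer tokenizer ships without a CLS token, so `tokenizer.cls_token_id` is `None` when the SQuAD processor looks it up. A minimal workaround sketch; which existing special token to reuse is an assumption and varies by checkpoint:
```python
from transformers import ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
# The checkpoint defines no [CLS] token, so reuse a special token that does exist.
tokenizer.cls_token = tokenizer.eos_token
assert tokenizer.cls_token_id is not None
```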
## Expected behavior
## Environment info
- `transformers` version:
- Platform: linux
- Python version: 3.8
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5436/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5435 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5435/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5435/comments | https://api.github.com/repos/huggingface/transformers/issues/5435/events | https://github.com/huggingface/transformers/issues/5435 | 648,921,074 | MDU6SXNzdWU2NDg5MjEwNzQ= | 5,435 | I want to load pre-trained model from file instead of file name | {
"login": "August-us",
"id": 26326479,
"node_id": "MDQ6VXNlcjI2MzI2NDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/26326479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/August-us",
"html_url": "https://github.com/August-us",
"followers_url": "https://api.github.com/users/August-us/followers",
"following_url": "https://api.github.com/users/August-us/following{/other_user}",
"gists_url": "https://api.github.com/users/August-us/gists{/gist_id}",
"starred_url": "https://api.github.com/users/August-us/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/August-us/subscriptions",
"organizations_url": "https://api.github.com/users/August-us/orgs",
"repos_url": "https://api.github.com/users/August-us/repos",
"events_url": "https://api.github.com/users/August-us/events{/privacy}",
"received_events_url": "https://api.github.com/users/August-us/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | Thanks for your excellent code. I recently encountered a problem: I want to load a pretrained model from another machine, and this server cannot map that path for my code, but I can load the model into a buffer. So I want to pass this buffer as the argument. What should I do? Thanks.
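A minimal sketch of what I have in mind (the `state_dict` keyword is the relevant hook; how the `buffer` bytes arrive is specific to my setup and is an assumption here):
```python
import io
import torch
from transformers import BertConfig, BertModel

# `buffer` holds the raw checkpoint bytes received from the other machine
state_dict = torch.load(io.BytesIO(buffer), map_location="cpu")
config = BertConfig()  # or deserialize the matching config the same way
model = BertModel.from_pretrained(None, config=config, state_dict=state_dict)
```
 | {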
"url": "https://api.github.com/repos/huggingface/transformers/issues/5435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5435/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5434 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5434/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5434/comments | https://api.github.com/repos/huggingface/transformers/issues/5434/events | https://github.com/huggingface/transformers/issues/5434 | 648,875,680 | MDU6SXNzdWU2NDg4NzU2ODA= | 5,434 | MiniLM transformers inconsistent log posteriors in multiple runs | {
"login": "sandhawalia",
"id": 10599550,
"node_id": "MDQ6VXNlcjEwNTk5NTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/10599550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sandhawalia",
"html_url": "https://github.com/sandhawalia",
"followers_url": "https://api.github.com/users/sandhawalia/followers",
"following_url": "https://api.github.com/users/sandhawalia/following{/other_user}",
"gists_url": "https://api.github.com/users/sandhawalia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sandhawalia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sandhawalia/subscriptions",
"organizations_url": "https://api.github.com/users/sandhawalia/orgs",
"repos_url": "https://api.github.com/users/sandhawalia/repos",
"events_url": "https://api.github.com/users/sandhawalia/events{/privacy}",
"received_events_url": "https://api.github.com/users/sandhawalia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What was the issue?",
"MiniLM is not distilled with Masked LM task, only [Self-Attention distillation](https://github.com/huggingface/transformers/tree/master/model_cards/microsoft/MiniLM-L12-H384-uncased). It doesn't have LM head in the weights file. They are initialised randomly at each run ๐ค \r\n\r\n```\r\n{'missing_keys': ['cls.predictions.transform.dense.weight', 'cls.predictions.bias', 'cls.predictions.decoder.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'], 'unexpected_keys': [], 'error_msgs': []}\r\n```\r\n"
] | 1,593 | 1,593 | 1,593 | NONE | null | # 🐛 Bug
## Information
**Describe the bug**
Using MiniLM for computing log likelihood of test sentences. Cross posted [here](https://github.com/microsoft/unilm/issues/196)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (attached below)
**To Reproduce**
Steps to reproduce the behavior:
1. pip install transformers==2.11.0, torch==1.5.0
2. Run the script pasted below, `hugging-face-bug-report.py`
3. Compare results across `gpt2`, `distilgpt2`, `microsoft/MiniLM-L12-H384-uncased`, `microsoft/DialoGPT-small`
**Expected behavior**
Log posteriors should not be different across multiple runs of the same model.
Example run with `gpt2` [consistent]
`python hugging-face-bug-report.py -m gpt2`
```
Starting gpt2 on cpu [if available]
-19.95 Hello, my dog is cute
-20.09 Hello, your dog is cute
-25.92 Nothing is what everything isn't
```
`python hugging-face-bug-report.py -m gpt2`
```
Starting gpt2 on cpu [if available]
-19.95 Hello, my dog is cute
-20.09 Hello, your dog is cute
-25.92 Nothing is what everything isn't
```
Example run with `microsoft/DialoGPT-small` [consistent]
`python hugging-face-bug-report.py -m microsoft/DialoGPT-small`
```
Starting microsoft/DialoGPT-small on cpu [if available]
-37.22 Hello, my dog is cute
-31.38 Hello, your dog is cute
-31.30 Nothing is what everything isn't
```
`python hugging-face-bug-report.py -m microsoft/DialoGPT-small`
```
Starting microsoft/DialoGPT-small on cpu [if available]
-37.22 Hello, my dog is cute
-31.38 Hello, your dog is cute
-31.30 Nothing is what everything isn't
```
**BUT** Example run with `microsoft/MiniLM-L12-H384-uncased` [**inconsistent**]
`python hugging-face-bug-report.py -m microsoft/MiniLM-L12-H384-uncased`
```
Starting microsoft/MiniLM-L12-H384-uncased on cpu [if available]
-82.84 Hello, my dog is cute
-81.92 Hello, your dog is cute
-90.66 Nothing is what everything isn't
```
`python hugging-face-bug-report.py -m microsoft/MiniLM-L12-H384-uncased`
```
Starting microsoft/MiniLM-L12-H384-uncased on cpu [if available]
-78.01 Hello, my dog is cute
-75.90 Hello, your dog is cute
-83.02 Nothing is what everything isn't
```
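As a quick sanity check, seeding the RNG before constructing the model makes the scores agree across runs, which points at weights being randomly initialized at load time (assumption: the seed is set before `from_pretrained` runs):
```python
import torch
torch.manual_seed(0)  # fix the RNG, then re-run the script and compare scores
```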
- `transformers` version: 2.11.0
- Platform: macOS
- Python version: Python 3.6.10 :: Anaconda, Inc.
- PyTorch version (GPU?): 1.5.0 , No GPU
- Tensorflow version (GPU?): NA
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Script
```
#!/usr/bin/env python3
# hugging-face-bug-report.py
import argparse

import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

# Name of the label keyword argument expected by each model's forward().
LABEL_FIELD_DICT = {'gpt2': 'labels',
                    'distilgpt2': 'labels',
                    'microsoft/MiniLM-L12-H384-uncased': 'lm_labels',
                    'microsoft/DialoGPT-small': 'labels'}


class LM(object):
    def __init__(self, model_name='gpt2', device='cpu'):
        print('Starting {} on {} [if available]'.format(model_name, device))
        self.model_name = model_name
        self.device = torch.device(device if torch.cuda.is_available() else 'cpu')
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelWithLMHead.from_pretrained(model_name).to(self.device)

    def prepare_batch(self, texts):
        # Tokenize each text and pad to the longest sequence in the batch.
        tokenized_input = []
        for text in texts:
            text_ids = self.tokenizer.encode(text, add_special_tokens=True)
            tokenized_input.append(text_ids)
        lens = list(map(len, tokenized_input))
        maxlen = max(lens)
        for i, t in enumerate(tokenized_input):
            tokenized_input[i] += [self.tokenizer.unk_token_id] * (maxlen - len(t))
        return torch.tensor(tokenized_input), torch.tensor(lens)

    def score(self, texts):
        with torch.no_grad():
            tensor_input, lens = self.prepare_batch(texts)
            mask = torch.arange(tensor_input.size(1))[None, :] < lens[:, None]
            labels = tensor_input.clone().detach()
            labels[~mask] = -100  # ignore padding positions in the loss
            params = list(map(lambda x: x.to(self.device),
                              [tensor_input, mask, labels]))
            inputs = {'input_ids': params[0], 'attention_mask': params[1],
                      LABEL_FIELD_DICT[self.model_name]: params[2]}
            outputs = self.model(**inputs)
            loss, logits = outputs[:2]
            log_posteriors = torch.log(torch.nn.Softmax(dim=2)(logits))
            results = []
            total_lp = 0.0
            for i, text in enumerate(texts):
                ids = tensor_input[i, :]
                lp = log_posteriors[i, :, :]
                sum_lp = 0.0
                # Position j predicts token j + 1, so skip the first token.
                for j, k in enumerate(ids.tolist()[1:]):
                    if j + 1 >= lens[i]:
                        break
                    sum_lp += lp[j, k]
                results.append((text, sum_lp))
                total_lp += sum_lp
            # Cross-check: the model's mean loss times the token count
            # should equal the summed log posteriors.
            total_lp_alternative = -loss * torch.sum(lens - 1)
            assert torch.isclose(total_lp_alternative, total_lp), \
                "{:.3f} ≠ {:.3f}".format(total_lp_alternative, total_lp)
            return results


def get_available_devices():
    return ['cpu'] + ['cuda:{}'.format(idx)
                      for idx in range(torch.cuda.device_count())]


if __name__ == "__main__":
    model_choices = list(LABEL_FIELD_DICT.keys())
    parser = argparse.ArgumentParser(description='Running LM on sample text from CLI')
    parser.add_argument('-m', '--model', help='model type', default='gpt2',
                        choices=model_choices)
    parser.add_argument('-d', '--device', help='device', default='cpu',
                        choices=get_available_devices())
    args = parser.parse_args()
    lm = LM(model_name=args.model, device=args.device)
    test_inputs = ["Hello, my dog is cute", "Hello, your dog is cute",
                   "Nothing is what everything isn't"]
    results = lm.score(test_inputs)
    for text, score in results:
        print(f"{score.item():.2f} {text}")
```
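For what it's worth, the randomness can be traced to weights missing from the checkpoint (a small check sketch; `output_loading_info` is the relevant flag):
```python
from transformers import AutoModelWithLMHead

model, info = AutoModelWithLMHead.from_pretrained(
    "microsoft/MiniLM-L12-H384-uncased", output_loading_info=True)
# Any cls.predictions.* (LM head) keys listed here are re-initialized randomly per run.
print(info["missing_keys"])
```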
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5434/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5433 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5433/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5433/comments | https://api.github.com/repos/huggingface/transformers/issues/5433/events | https://github.com/huggingface/transformers/pull/5433 | 648,873,762 | MDExOlB1bGxSZXF1ZXN0NDQyNjA4NzA5 | 5,433 | [Reformer] Add QA head to reformer model | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=h1) Report\n> Merging [#5433](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/87716a6d072b2b66415ce43086c73b04e63fe0fe&el=desc) will **increase** coverage by `0.56%`.\n> The diff coverage is `65.71%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5433 +/- ##\n==========================================\n+ Coverage 77.69% 78.25% +0.56% \n==========================================\n Files 140 140 \n Lines 24334 24368 +34 \n==========================================\n+ Hits 18906 19070 +164 \n+ Misses 5428 5298 -130 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.22% <รธ> (รธ)` | |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.11% <64.70%> (-1.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `74.41% <100.00%> (รธ)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.68% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.10% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5433/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=footer). Last update [87716a6...e892adb](https://codecov.io/gh/huggingface/transformers/pull/5433?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | MEMBER | null | This PR adds `ReformerForQuestionAnswering`. At the moment there are no pretrained weights for Reformer QA, so no example is added.
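For illustration only (nothing like this ships with the PR), a minimal usage sketch; the checkpoint name is an assumption, and since there are no pretrained QA weights the QA head below is randomly initialized:
```python
import torch
from transformers import ReformerTokenizer, ReformerForQuestionAnswering

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerForQuestionAnswering.from_pretrained("google/reformer-crime-and-punishment")

inputs = tokenizer("He lives in St. Petersburg.", return_tensors="pt")
start_positions = torch.tensor([3])  # illustrative target span
end_positions = torch.tensor([5])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss, start_logits, end_logits = outputs[:3]
```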
Checked all tests including RUN_SLOW on GPU => all pass. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5433/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5433/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5433",
"html_url": "https://github.com/huggingface/transformers/pull/5433",
"diff_url": "https://github.com/huggingface/transformers/pull/5433.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5433.patch",
"merged_at": 1593620835000
} |
https://api.github.com/repos/huggingface/transformers/issues/5432 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5432/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5432/comments | https://api.github.com/repos/huggingface/transformers/issues/5432/events | https://github.com/huggingface/transformers/pull/5432 | 648,870,373 | MDExOlB1bGxSZXF1ZXN0NDQyNjA1ODY2 | 5,432 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=h1) Report\n> Merging [#5432](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d60d231ea497aa2ed46226f51e360b207a79682e&el=desc) will **increase** coverage by `0.23%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5432 +/- ##\n==========================================\n+ Coverage 77.61% 77.84% +0.23% \n==========================================\n Files 140 140 \n Lines 24343 24343 \n==========================================\n+ Hits 18893 18951 +58 \n+ Misses 5450 5392 -58 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.10% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.40% <0.00%> (+0.71%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (+2.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5432/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=footer). Last update [d60d231...c047d80](https://codecov.io/gh/huggingface/transformers/pull/5432?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Create model card for electra-base-discriminator fine-tuned on SQUAD v1.1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5432/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5432",
"html_url": "https://github.com/huggingface/transformers/pull/5432",
"diff_url": "https://github.com/huggingface/transformers/pull/5432.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5432.patch",
"merged_at": 1593699391000
} |
https://api.github.com/repos/huggingface/transformers/issues/5431 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5431/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5431/comments | https://api.github.com/repos/huggingface/transformers/issues/5431/events | https://github.com/huggingface/transformers/issues/5431 | 648,864,061 | MDU6SXNzdWU2NDg4NjQwNjE= | 5,431 | Can't load to predict a reproduced DistilBERT | {
"login": "learnercat",
"id": 25918640,
"node_id": "MDQ6VXNlcjI1OTE4NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25918640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/learnercat",
"html_url": "https://github.com/learnercat",
"followers_url": "https://api.github.com/users/learnercat/followers",
"following_url": "https://api.github.com/users/learnercat/following{/other_user}",
"gists_url": "https://api.github.com/users/learnercat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/learnercat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/learnercat/subscriptions",
"organizations_url": "https://api.github.com/users/learnercat/orgs",
"repos_url": "https://api.github.com/users/learnercat/repos",
"events_url": "https://api.github.com/users/learnercat/events{/privacy}",
"received_events_url": "https://api.github.com/users/learnercat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have tested reproducing **[Fine Tuning Transformer for MultiClass Text Classification](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)** successfully. But I tried to load the model and vocab files from a spirited file predict_distilbert.ipynb as below:\r\n`# Importing libraries`\r\n`import pandas as pd`\r\n`import torch`\r\n`import transformers`\r\n`import numpy as np`\r\n`from torch.utils.data import Dataset, DataLoader`\r\n`from transformers import DistilBertModel, DistilBertTokenizer`\r\n`test_string = \"The temperature, relative humidity and wind information shown above are the respective forecasts over a 24-hour period.\"`\r\n`tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased','models/vocab_distilbert_news.bin')`\r\n`load_model = DistilBertModel.from_pretrained('distilbert-base-cased','models/pytorch_distilbert_news.bin')`\r\nThen I got \"TypeError ---> Traceback (most recent call last)\"\r\n`<ipython-input-17-274a96b92c04> in <module>`\r\n`----> 1 tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased','models/vocab_distilbert_news.bin')`\r\n`2 load_model = DistilBertModel.from_pretrained('distilbert-base-cased','models/pytorch_distilbert_news.bin')`\r\n`~/anaconda3/envs/hgface/lib/python3.7/site-packages/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs)`\r\n`--> 911 return cls._from_pretrained(*inputs, **kwargs)`\r\n`912 \r\n 913 @classmethod`\r\n`~/anaconda3/envs/hgface/lib/python3.7/site-packages/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)`\r\n`1060 # Instantiate tokenizer.`\r\n`1061 try:`\r\n`-> 1062 tokenizer = cls(*init_inputs, **init_kwargs)`\r\n`1063 except OSError:`\r\n`1064 raise OSError(`\r\n`TypeError: __init__() got multiple values for argument 'vocab_file'`\r\n\r\nPlease help!\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"# Solved this problem in [Fine tuning DistilBERT model OSError: Unable to load weights from pytorch checkpoint file. #4](https://github.com/abhimishra91/transformers-tutorials/issues/4)"
] | 1,593 | 1,594 | 1,594 | NONE | null | How do I load and run predictions with a fine-tuned DistilBERT multi-class classification model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5431/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5430 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5430/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5430/comments | https://api.github.com/repos/huggingface/transformers/issues/5430/events | https://github.com/huggingface/transformers/pull/5430 | 648,843,815 | MDExOlB1bGxSZXF1ZXN0NDQyNTgzNTMw | 5,430 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=h1) Report\n> Merging [#5430](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d60d231ea497aa2ed46226f51e360b207a79682e&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5430 +/- ##\n==========================================\n- Coverage 77.61% 77.60% -0.01% \n==========================================\n Files 140 140 \n Lines 24343 24343 \n==========================================\n- Hits 18893 18892 -1 \n- Misses 5450 5451 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=footer). Last update [d60d231...e3436bf](https://codecov.io/gh/huggingface/transformers/pull/5430?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | Create model card for electra-small-discriminator finetuned on SQUAD v1.1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5430/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5430",
"html_url": "https://github.com/huggingface/transformers/pull/5430",
"diff_url": "https://github.com/huggingface/transformers/pull/5430.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5430.patch",
"merged_at": 1594118502000
} |
https://api.github.com/repos/huggingface/transformers/issues/5429 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5429/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5429/comments | https://api.github.com/repos/huggingface/transformers/issues/5429/events | https://github.com/huggingface/transformers/pull/5429 | 648,834,287 | MDExOlB1bGxSZXF1ZXN0NDQyNTc1NDQ2 | 5,429 | QA Pipelines fixes | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe we should fix this upstream. I wanted to keep identical behavior for `squad_convert_examples_to_features` while moving the code to the new tokenizer API but maybe I missed something.",
"@thomwolf I removed the commit on the padding part to make sure things continue to work at very short term. \r\n\r\nAlso, after looking at the code, I've the feeling it requires quite a bit of refactoring that might live in its own PR, so I prefer isolate the few changes here and the padding stuff.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5429?src=pr&el=h1) Report\n> Merging [#5429](https://codecov.io/gh/huggingface/transformers/pull/5429?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9a473f1e43221348334b9e7f95bb45770b7ef268&el=desc) will **decrease** coverage by `1.08%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5429?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5429 +/- ##\n==========================================\n- Coverage 77.85% 76.77% -1.09% \n==========================================\n Files 138 138 \n Lines 24314 24314 \n==========================================\n- Hits 18930 18667 -263 \n- Misses 5384 5647 +263 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5429?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.31% <100.00%> (รธ)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.09% <0.00%> (-0.44%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5429?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5429?src=pr&el=footer). 
Last update [9a473f1...55e2f90](https://codecov.io/gh/huggingface/transformers/pull/5429?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | MEMBER | null | **1. Some newly introduced models such as [bart-large-finetuned-squadv1](https://huggingface.co/valhalla/bart-large-finetuned-squadv1) have more than 2 outputs by default on the QA pipeline, which is not supported.**
- This PR makes it possible to support such outputs and assumes the first 2 elements are the actual `start` and `end` logits.
**2. Minor refactoring of the decoding strategy:**
- Actually mask the padding & question **before** applying the softmax to extract the answer
- Use the stabilized version of the `softmax` in log-space
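For reference, the numerically stable log-space softmax amounts to the classic max-shift trick (plain NumPy sketch, not the pipeline's exact code):
```python
import numpy as np

def log_softmax(x, axis=-1):
    # Shifting by the max keeps exp() from overflowing; the result is exact in log-space.
    shifted = x - x.max(axis=axis, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))

probs = np.exp(log_softmax(np.array([1000.0, 1001.0, 1002.0])))  # no overflow
```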
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5429/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5429",
"html_url": "https://github.com/huggingface/transformers/pull/5429",
"diff_url": "https://github.com/huggingface/transformers/pull/5429.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5429.patch",
"merged_at": 1593764961000
} |
https://api.github.com/repos/huggingface/transformers/issues/5428 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5428/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5428/comments | https://api.github.com/repos/huggingface/transformers/issues/5428/events | https://github.com/huggingface/transformers/issues/5428 | 648,824,383 | MDU6SXNzdWU2NDg4MjQzODM= | 5,428 | How to use (and preferably finetune) BART for text infilling? | {
"login": "tomaszgarbus",
"id": 11790160,
"node_id": "MDQ6VXNlcjExNzkwMTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/11790160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaszgarbus",
"html_url": "https://github.com/tomaszgarbus",
"followers_url": "https://api.github.com/users/tomaszgarbus/followers",
"following_url": "https://api.github.com/users/tomaszgarbus/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaszgarbus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaszgarbus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaszgarbus/subscriptions",
"organizations_url": "https://api.github.com/users/tomaszgarbus/orgs",
"repos_url": "https://api.github.com/users/tomaszgarbus/repos",
"events_url": "https://api.github.com/users/tomaszgarbus/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaszgarbus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] | closed | false | null | [] | [
"@julien-c , @sshleifer ?",
"Sorry for the slow response.\r\nUnfortunately, text infilling is not yet supported. It would be a welcome contribution! I think the equivalent fairseq task is called `DenoisingTask` \r\nhttps://github.com/pytorch/fairseq/blob/aa79bb9c37b27e3f84e7a4e182175d3b50a79041/fairseq/tasks/denoising.py#L27",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,600 | 1,600 | NONE | null | [Here](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration) it is shown how to use BART for simple mask filling (one <mask> token = one generated token), but how can it be used for text infilling? The BART paper states that the model was pretrained on such a task, so it should be possible.
Is the only solution to simply take the `facebook/bart-large` model for summarization and finetune it on a dataset with <mask> tokens, or is there a better way?
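For reference, the single-token case that already works (a minimal sketch using `facebook/bart-large`; getting the model to infill a longer span per <mask> is the part I am unsure about):

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

text = "My friends are <mask> but they eat too many carbs."
input_ids = tokenizer([text], return_tensors="pt")["input_ids"]

# generate() can emit more tokens than the input contains, which looks like
# the closest out-of-the-box approximation of infilling
generated = model.generate(input_ids, num_beams=4, max_length=30)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```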
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5428/reactions",
"total_count": 15,
"+1": 15,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5428/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5427 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5427/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5427/comments | https://api.github.com/repos/huggingface/transformers/issues/5427/events | https://github.com/huggingface/transformers/issues/5427 | 648,809,243 | MDU6SXNzdWU2NDg4MDkyNDM= | 5,427 | WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss. WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss. | {
"login": "dhirajgite",
"id": 56394689,
"node_id": "MDQ6VXNlcjU2Mzk0Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/56394689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhirajgite",
"html_url": "https://github.com/dhirajgite",
"followers_url": "https://api.github.com/users/dhirajgite/followers",
"following_url": "https://api.github.com/users/dhirajgite/following{/other_user}",
"gists_url": "https://api.github.com/users/dhirajgite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhirajgite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhirajgite/subscriptions",
"organizations_url": "https://api.github.com/users/dhirajgite/orgs",
"repos_url": "https://api.github.com/users/dhirajgite/repos",
"events_url": "https://api.github.com/users/dhirajgite/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhirajgite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am also encountering a similar issue from yesterday. It never happened before.\r\n```\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss.\r\n```",
"This https://github.com/huggingface/transformers/issues/5421#issuecomment-652626787 may be useful.",
"Closed by mistake",
"Hello everyone,\r\nI am fine-tuning a BERT model from huggingface transformers for Named Entity Recognition Task in tensorflow. The input to the model is a single word and output is a tag of that word. I have created a custom generator function (data_generator) from where I am getting data while training. I have freezed the bert layer in training mode and added some layers on top of it to predict the tag of the given word.\r\n\r\n**The code is this :** \r\n\r\n```python\r\nfrom tensorflow.keras.layers import Input, Dense, Activation, Dropout, LSTM, GlobalMaxPool1D\r\nfrom tensorflow.keras.models import Model\r\nfrom tensorflow.keras.utils import to_categorical\r\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\r\n\r\nfrom transformers import BertTokenizer, TFBertModel, BertConfig\r\n\r\n##Load the BERT tokenizer.\r\n\r\nprint('Loading BERT tokenizer...')\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)\r\n\r\nbert = 'bert-base-uncased'\r\n\r\nconfig = BertConfig(dropout=0.2, attention_dropout=0.2)\r\nconfig.output_hidden_states = False\r\ntransformer_model = TFBertModel.from_pretrained(bert, config = config)\r\n\r\ninput_ids_in = Input(shape=(max_len,), name='input_token', dtype='int32')\r\ninput_masks_in = Input(shape=(max_len,), name='masked_token', dtype='int32')\r\nembedding_layer = transformer_model(input_ids_in, attention_mask=input_masks_in)[0]\r\n\r\nX = LSTM(50, return_sequences=True)(embedding_layer)\r\nX = GlobalMaxPool1D()(X)\r\nX = Dense(50, activation='relu')(X)\r\nX = Dropout(0.2)(X)\r\nX = Dense(num_labels, activation='softmax')(X)\r\n\r\nmodel = Model(inputs=[input_ids_in, input_masks_in], outputs = X)\r\n\r\nfor layer in model.layers[:3]:\r\n layer.trainable = False\r\n\r\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')\r\n\r\ntrain_gen = data_generator(sentences, tags, tag2ix, max_len, number_sent_per_batch)\r\nmodel.fit(train_gen, epochs=1, steps_per_epoch=steps, verbose=1)\r\n```\r\n\r\n**The error I am getting is this :** \r\n```python\r\nValueError: in user code:\r\n\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function *\r\n outputs = self.distribute_strategy.run(\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run **\r\n return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica\r\n return self._call_for_each_replica(fn, args, kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica\r\n return fn(*args, **kwargs)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:541 train_step **\r\n self.trainable_variables)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1804 _minimize\r\n trainable_variables))\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:521 _aggregate_gradients\r\n filtered_grads_and_vars = _filter_grads(grads_and_vars)\r\n /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1219 _filter_grads\r\n ([v.name for _, v in grads_and_vars],))\r\n\r\n ValueError: No gradients provided for any variable: ['lstm_2/lstm_cell_2/kernel:0', 'lstm_2/lstm_cell_2/recurrent_kernel:0', 'lstm_2/lstm_cell_2/bias:0', 'dense_8/kernel:0', 'dense_8/bias:0', 
'dense_9/kernel:0', 'dense_9/bias:0'].\r\n```\r\n\r\nI have gone through many links like :\r\n\r\n<https://github.com/tensorflow/tensorflow/issues/1511>\r\n\r\n<https://github.com/tensorflow/tensorflow/issues/27949>\r\n\r\n<https://github.com/huggingface/transformers/issues/5421>\r\n\r\nand many more.\r\n\r\nThere are many solutions provided in these github issues but couldn't find the solution of my error. I have even posted on stackoverflow (<https://stackoverflow.com/questions/62863374/valueerror-no-gradients-provided-for-any-variable-in-tensorflow-2-2-0>) but couldn't find the solution.\r\n\r\nIf someone can point the mistake, it would be of great help. Thanks in advance!\r\n\r\nTensorflow Version : 2.2.0",
"๐",
"้่ฆๅป็ปไธ้จๅๅๆฐ๏ผ ๆ็็่งฃๆฏbertๆจกๅ่ชๅธฆ็ๅฉ็จCLSๅ็ๅ็ฑปๆจกๅ๏ผๆฒกๆไฝฟ็จๆๅ็ๆถๅ่ฟไผ ๅ
ฅไบgradient่ฟ่กๆจกๅ็ๆขฏๅบฆๆดๆฐ๏ผๆไปฅไผๆฅ้ใ่ฆๅป็ป้ฃ้จๅๅๆฐ๏ผ๏ผ๏ผ"
] | 1,593 | 1,663 | 1,594 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
```python
# Imports added for completeness; max_length is a hyperparameter defined elsewhere.
from tensorflow.keras.layers import Input, Bidirectional, LSTM, Dense, Dropout
from tensorflow.keras.models import Model
from transformers import TFBertModel

ip1 = Input(shape=(max_length + 2,), dtype="int32")  # input_ids
ip2 = Input(shape=(max_length + 2,), dtype="int32")  # attention_mask
ip3 = Input(shape=(max_length + 2,), dtype="int32")  # token_type_ids

Bert_model = TFBertModel.from_pretrained('bert-base-uncased')
# Output [0] is the sequence output ([CLS]/[SEP] sliced off); the pooler output ([1]) is never used.
ip = Bert_model(ip1, attention_mask=ip2, token_type_ids=ip3)[0][:, 1:-1, :]

out = Bidirectional(LSTM(units=768))(ip)
out = Dense(384, activation='relu')(out)
out = Dropout(0.2)(out)
out = Dense(units=9, activation="softmax")(out)

model = Model([ip1, ip2, ip3], out)
```
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
When using `model.fit`, the following warnings appear:

```
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss.
```
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Windows 10
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.1.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5427/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5426 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5426/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5426/comments | https://api.github.com/repos/huggingface/transformers/issues/5426/events | https://github.com/huggingface/transformers/pull/5426 | 648,782,339 | MDExOlB1bGxSZXF1ZXN0NDQyNTMyNjAw | 5,426 | [Reformer] Add Masked LM Reformer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=h1) Report\n> Merging [#5426](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/35befd9ce31c23a774fd34f57bc44033ce70141d&el=desc) will **increase** coverage by `0.29%`.\n> The diff coverage is `96.15%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5426 +/- ##\n==========================================\n+ Coverage 77.57% 77.86% +0.29% \n==========================================\n Files 141 140 -1 \n Lines 24581 24368 -213 \n==========================================\n- Hits 19068 18974 -94 \n+ Misses 5513 5394 -119 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.22% <รธ> (รธ)` | |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `89.45% <96.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `74.41% <100.00%> (รธ)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `78.26% <0.00%> (-7.46%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `76.84% <0.00%> (-0.71%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <0.00%> (-0.50%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.31% <0.00%> (-0.48%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <0.00%> (-0.39%)` | :arrow_down: |\n| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/5426/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=footer). Last update [35befd9...4e52c6b](https://codecov.io/gh/huggingface/transformers/pull/5426?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Yeah, I'm only adding one assert, which forces `ReformerLMHead` to not have bi-directional attention, but I doubt anybody has used this yet anyways",
"When I make the sequences shorter but increase the batch size an zero division error returns.\r\nDo I need to take care about something specific ?",
"> When I make the sequences shorter but increase the batch size an zero division error returns.\r\n> Do I need to take care about something specific ?\r\n\r\nHey @flozi00, would be great if you can open an issue with environment info and code so that I can reproduce :-) "
] | 1,593 | 1,593 | 1,593 | MEMBER | null | Similar to BERT, Reformer LM model is split into two:
- The standard Causal Language Modeling Reformer `ReformerModelWithLMHead`: Here we have a tiny breaking change as `ReformerModelWithLMHead` can no longer be used with bi-directional self-attention. This option should not really have been used anyways as there are no pretrained weights
- A masked language model Reformer `ReformerForMaskedLM`.
Here a colab notebook showcasing how to use Reformer for MLM: https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing
Checked all tests including RUN_SLOW on GPU => all pass. | {
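A minimal usage sketch of the split (class names as added in this PR; the configs are illustrative defaults, not pretrained weights):

```python
from transformers import ReformerConfig, ReformerForMaskedLM, ReformerModelWithLMHead

# Causal LM head: the config must now be uni-directional (is_decoder=True),
# otherwise the new assert fires; this is the tiny breaking change above.
clm_model = ReformerModelWithLMHead(ReformerConfig(is_decoder=True))

# Masked LM head: bi-directional self-attention, so is_decoder must be False.
mlm_model = ReformerForMaskedLM(ReformerConfig(is_decoder=False))
```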
"url": "https://api.github.com/repos/huggingface/transformers/issues/5426/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5426/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5426",
"html_url": "https://github.com/huggingface/transformers/pull/5426",
"diff_url": "https://github.com/huggingface/transformers/pull/5426.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5426.patch",
"merged_at": 1593636198000
} |
https://api.github.com/repos/huggingface/transformers/issues/5425 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5425/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5425/comments | https://api.github.com/repos/huggingface/transformers/issues/5425/events | https://github.com/huggingface/transformers/issues/5425 | 648,767,250 | MDU6SXNzdWU2NDg3NjcyNTA= | 5,425 | [Quick poll] Give your opinion on the future of ๐ค transformers | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | MEMBER | null | The ๐ค transformers library is at a crossroad ๐ and could evolve in many directions, from teaching to research & applications.
We made a quick poll to get your opinion.
If you have 2-3 minutes and want to participate in shaping the future of the library 👉 https://docs.google.com/forms/d/e/1FAIpQLSeKWNE1SyaSvqLYxWQxTA_XeRCVm3_ohmr3UXJgpIxzZhSXlg/viewform
(please reply in the above feedback form rather than to this thread) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5425/reactions",
"total_count": 13,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/5425/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5424 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5424/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5424/comments | https://api.github.com/repos/huggingface/transformers/issues/5424/events | https://github.com/huggingface/transformers/issues/5424 | 648,758,667 | MDU6SXNzdWU2NDg3NTg2Njc= | 5,424 | Bart EncoderLayer masked_fill not working properly with pytorch 1.4 | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hello, I still hava this problem when my pytorch were upgraded to 1.5 . I don't know if it's related to the python version . \r\nCan you give me some suggestions ๏ผ Thank you so much!\r\n\r\nInformation\r\n` File \"/home/ynos/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/functional.py\", line 3937, in multi_head_attention_forward\r\n float('-inf'),\r\nRuntimeError: Expected object of scalar type Bool but got scalar type Long for argument #2 'mask' in call to _th_masked_fill_bool_\r\n`\r\n\r\nEnvironment info:\r\n- Python version: 3.6.2\r\n- PyTorch version (GPU): 1.5"
] | 1,593 | 1,617 | 1,599 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bart
I'm trying to use the EncoderLayer of Bart but I realized that `attn_weights = attn_weights.masked_fill(reshaped, float("-inf"))` at line [659](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L659) does not work when `reshaped` is an `int` or `float` tensor and throws the following error:
```
attn_weights = attn_weights.masked_fill(reshaped, float("-inf"))
RuntimeError: Expected object of scalar type Bool but got scalar type Float for argument #2 'mask' in call to _th_masked_fill_bool_
```
However, it does not raise an error when I change `reshaped` to a bool tensor, but in that case it returns a tensor of `nan` values.
With @patrickvonplaten's help, we realized that it was related to the PyTorch version, because upgrading my torch version from 1.4 to 1.5 solved the problem.
## To reproduce
```python
from transformers.modeling_bart import EncoderLayer
from transformers import BartConfig
import torch

hidden_states = torch.tensor(3 * [7 * [1024 * [0.4]]])  # (batch=3, seq=7, dim=1024)
attn_mask = torch.ones(hidden_states.shape[:2])         # float mask triggers the error on torch 1.4
layer = EncoderLayer(BartConfig())
layer(hidden_states.transpose(0, 1), attn_mask)         # EncoderLayer expects (seq, batch, dim)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0
- Python version: 3.6
- PyTorch version (GPU?): 1.4
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5424/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5423 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5423/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5423/comments | https://api.github.com/repos/huggingface/transformers/issues/5423/events | https://github.com/huggingface/transformers/issues/5423 | 648,639,186 | MDU6SXNzdWU2NDg2MzkxODY= | 5,423 | Error Instantiating T5-11B from conributed models | {
"login": "lordtt13",
"id": 35500534,
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordtt13",
"html_url": "https://github.com/lordtt13",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"same result. I can't download",
"Please see https://github.com/huggingface/transformers/issues/5986#issuecomment-663090043",
"Works when I use:\r\n\r\n```python\r\nimport transformers\r\n\r\nt5 = transformers.AutoModel.from_pretrained('t5-11b', use_cdn = False)\r\n```\r\n\r\nThank You!"
] | 1,593 | 1,595 | 1,595 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using: T5-11B
Language I am using the model on: English
The problem arises when:
I try downloading the T5-11B model
The task I am working on:
Evaluating the ROUGE score on the CNN dataset
## To reproduce
Steps to reproduce the behavior:
Just try instantiating the T5-11B model using the `AutoModel` class:
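For example:

```python
from transformers import AutoModel

# Raises the OSError below on transformers 2.11.0
model = AutoModel.from_pretrained("t5-11b")
```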
Error message:

```
OSError: Can't load weights for 't5-11b'. Make sure that:
- 't5-11b' is a correct model identifier listed on 'https://huggingface.co/models'
- or 't5-11b' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
```
## Expected behavior
Would instantiate the T5-11B model.
## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-5.3.0-61-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5423/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5423/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5422 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5422/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5422/comments | https://api.github.com/repos/huggingface/transformers/issues/5422/events | https://github.com/huggingface/transformers/pull/5422 | 648,607,773 | MDExOlB1bGxSZXF1ZXN0NDQyMzg5NDQ4 | 5,422 | Create README.md | {
"login": "DeepsMoseli",
"id": 29062994,
"node_id": "MDQ6VXNlcjI5MDYyOTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/29062994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DeepsMoseli",
"html_url": "https://github.com/DeepsMoseli",
"followers_url": "https://api.github.com/users/DeepsMoseli/followers",
"following_url": "https://api.github.com/users/DeepsMoseli/following{/other_user}",
"gists_url": "https://api.github.com/users/DeepsMoseli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DeepsMoseli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DeepsMoseli/subscriptions",
"organizations_url": "https://api.github.com/users/DeepsMoseli/orgs",
"repos_url": "https://api.github.com/users/DeepsMoseli/repos",
"events_url": "https://api.github.com/users/DeepsMoseli/events{/privacy}",
"received_events_url": "https://api.github.com/users/DeepsMoseli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Cool",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=h1) Report\n> Merging [#5422](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fcf0652460753f8a81f7576e8abdaa6b3742f00e&el=desc) will **decrease** coverage by `0.41%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5422 +/- ##\n==========================================\n- Coverage 76.69% 76.28% -0.42% \n==========================================\n Files 140 140 \n Lines 24343 24343 \n==========================================\n- Hits 18671 18570 -101 \n- Misses 5672 5773 +101 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=tree) | Coverage ฮ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.92% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.31% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <0.00%> (+1.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+13.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `ฮ = absolute <relative> (impact)`, `รธ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=footer). Last update [fcf0652...a0ee0ae](https://codecov.io/gh/huggingface/transformers/pull/5422?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Card for my model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5422/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5422",
"html_url": "https://github.com/huggingface/transformers/pull/5422",
"diff_url": "https://github.com/huggingface/transformers/pull/5422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5422.patch",
"merged_at": 1593594111000
} |
https://api.github.com/repos/huggingface/transformers/issues/5421 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5421/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5421/comments | https://api.github.com/repos/huggingface/transformers/issues/5421/events | https://github.com/huggingface/transformers/issues/5421 | 648,604,879 | MDU6SXNzdWU2NDg2MDQ4Nzk= | 5,421 | What to do about this warning message: "Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification" | {
"login": "ohmeow",
"id": 14000,
"node_id": "MDQ6VXNlcjE0MDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/14000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohmeow",
"html_url": "https://github.com/ohmeow",
"followers_url": "https://api.github.com/users/ohmeow/followers",
"following_url": "https://api.github.com/users/ohmeow/following{/other_user}",
"gists_url": "https://api.github.com/users/ohmeow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohmeow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohmeow/subscriptions",
"organizations_url": "https://api.github.com/users/ohmeow/orgs",
"repos_url": "https://api.github.com/users/ohmeow/repos",
"events_url": "https://api.github.com/users/ohmeow/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohmeow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not sure what's happening with the multiple duplicate opened issues, @ohmeow?\r\n\r\nIs GitHub flaky again? :)",
"I am also encountering the same warning. \r\n\r\nWhen loading the model\r\n```\r\nSome weights of the model checkpoint at bert-base-uncased were not used when initializing TFBertModel: ['nsp___cls', 'mlm___cls']\r\n- This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nAll the weights of TFBertModel were initialized from the model checkpoint at bert-base-uncased.\r\nIf your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFBertModel for predictions without further training.\r\n```\r\n\r\nWhen attempting to fine tune it:\r\n```\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss.\r\n```\r\n\r\nIs the model correctly fine-tuning? Are the pre-trained model weights also getting updated (fine-tuned) or only the layers outside(above) the pre-trained model are changing their weights while training?\r\n",
"> Not sure what's happening with the multiple duplicate opened issues, @ohmeow?\r\n> \r\n> Is GitHub flaky again? :)\r\n\r\nI noticed the same thing. Not sure what is going on ... but I swear I only opened this one :)",
"@ohmeow you're loading the `bert-base-cased` checkpoint (which is a checkpoint that was trained using a similar architecture to `BertForPreTraining`) in a `BertForSequenceClassification` model.\r\n\r\nThis means that:\r\n\r\n- The layers that `BertForPreTraining` has, but `BertForSequenceClassification` does not have will be discarded\r\n- The layers that `BertForSequenceClassification` has but `BertForPreTraining` does not have will be randomly initialized.\r\n\r\nThis is expected, and tells you that you won't have good performance with your `BertForSequenceClassification` model before you fine-tune it :slightly_smiling_face:.\r\n\r\n\r\n@fliptrail this warning means that during your training, you're not using the `pooler` in order to compute the loss. I don't know how you're finetuning your model, but if you're not using the pooler layer then there's no need to worry about that warning.",
"@LysandreJik Thank you for your response. \r\nI am using the code: \r\n```\r\ndef main_model():\r\n encoder = ppd.TFBertModel.from_pretrained(\"bert-base-uncased\")\r\n input_ids = tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32)\r\n token_type_ids = tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32)\r\n attention_mask = tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32)\r\n\r\n embedding = encoder(input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask)[0]\r\n\r\n pooling = tf.keras.layers.GlobalAveragePooling1D()(embedding)\r\n normalization = tf.keras.layers.BatchNormalization()(pooling)\r\n dropout = tf.keras.layers.Dropout(0.1)(normalization)\r\n\r\n out = tf.keras.layers.Dense(1, activation=\"sigmoid\", name=\"final_output_bert\")(dropout)\r\n\r\n model = tf.keras.Model(inputs=[input_ids, token_type_ids, attention_mask], outputs=out)\r\n\r\n loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)\r\n optimizer = tf.keras.optimizers.Adam(lr=2e-5)\r\n metrics=['accuracy', tf.keras.metrics.FalseNegatives(), tf.keras.metrics.FalsePositives()]\r\n\r\n model.compile(optimizer=optimizer, loss=loss, metrics=metrics)\r\n return model\r\n\r\nmodel = main_model()\r\nmodel.summary()\r\n```\r\n\r\nI am only using the `TFBertModel.from_pretrained(\"bert-base-uncased\")` pre-built class. I am not initializing it from any other class. Still, I am encountering the warning. From what I can understand this should only appear when initializing given pre-trained model inside another class. \r\nAm I fine-tuning correctly? Are the BERT layer weights also getting updated?\r\n\r\nWarning while loading model:\r\n```\r\nSome weights of the model checkpoint at bert-base-uncased were not used when initializing TFBertModel: ['nsp___cls', 'mlm___cls']\r\n- This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nAll the weights of TFBertModel were initialized from the model checkpoint at bert-base-uncased.\r\nIf your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFBertModel for predictions without further training.\r\n```\r\n\r\nWhile attempting to train:\r\n```\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model_1/bert/pooler/dense/kernel:0', 'tf_bert_model_1/bert/pooler/dense/bias:0'] when minimizing the loss.\r\n```\r\n\r\nThis warning only started to appear from yesterday in all my codes and other sample codes given.",
"Hello everyone,\r\nI also start getting this error today. before today it was working fine. Are there any changes that take place in colab?\r\nThis is the code I am using:\r\n\r\n !pip install transformers\r\n import TensorFlow as to\r\n import transformers\r\n from transformers import TFBertForSequenceClassification, BertConfig\r\n tokenizer = transformers.BertTokenizer('gdrive/My Drive/Colab Notebooks/vocab.txt', do_lower_case=True)\r\n\r\n max_seq_length = 128\r\n\r\n bert = 'bert-large-uncased'\r\n config = BertConfig.from_pretrained('bert-large-uncased', output_hidden_states=True, hidden_dropout_prob=0.2, \r\n attention_probs_dropout_prob=0.2)\r\n\r\n transformer_model = TFBertForSequenceClassification.from_pretrained(bert, config=config)\r\n\r\n input_ids_in = tf.keras.layers.Input(shape=(max_seq_length,), name='input_token', dtype='int32')\r\n input_masks_in = tf.keras.layers.Input(shape=(max_seq_length,), name='masked_token', dtype='int32')\r\n input_segments_in = tf.keras.layers.Input(shape=(max_seq_length,), name='segment_ids', dtype='int32') \r\n\r\n embedding_layer = transformer_model(input_ids_in, attention_mask=input_masks_in, token_type_ids=input_segments_in)\r\n\r\nI have been using this same code for more than 2 weeks and no problem till yesterday.\r\nPlease if anyone finds the solution, share it.\r\nThank you",
"Thanks @LysandreJik \r\n\r\n> This is expected, and tells you that you won't have good performance with your BertForSequenceClassification model before you fine-tune it \r\n\r\nMakes sense. \r\n\r\nNow, how do we know what checkpoints are available that ***were*** trained on `BertForSequenceClassification`?",
"@fliptrail in your code you have the following:\r\n\r\n```py\r\nembedding = encoder(input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask)[0]\r\n```\r\n\r\nwhich means you're only getting the first output of the model, and using that to compute the loss. The first output of the model is the hidden states:\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_bert.py#L716-L738\r\n\r\n```\r\n Returns:\r\n :obj:`tuple(tf.Tensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs:\r\n last_hidden_state (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`):\r\n Sequence of hidden-states at the output of the last layer of the model.\r\n pooler_output (:obj:`tf.Tensor` of shape :obj:`(batch_size, hidden_size)`):\r\n Last layer hidden-state of the first token of the sequence (classification token)\r\n further processed by a Linear layer and a Tanh activation function. The Linear\r\n layer weights are trained from the next sentence prediction (classification)\r\n objective during Bert pretraining. This output is usually *not* a good summary\r\n of the semantic content of the input, you're often better with averaging or pooling\r\n the sequence of hidden-states for the whole input sequence.\r\n hidden_states (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):\r\n tuple of :obj:`tf.Tensor` (one for the output of the embeddings + one for the output of each layer)\r\n of shape :obj:`(batch_size, sequence_length, hidden_size)`.\r\n Hidden-states of the model at the output of each layer plus the initial embedding outputs.\r\n attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):\r\n tuple of :obj:`tf.Tensor` (one for each layer) of shape\r\n :obj:`(batch_size, num_heads, sequence_length, sequence_length)`:\r\n Attentions weights after the attention softmax, used to compute the weighted average in the self-attention\r\n heads.\r\n \"\"\"\r\n```\r\n\r\nYou're ignoring the second value which is the pooler output. The warnings are normal in your case.",
"@VaibhavBhatnagar17, these are warnings, not errors. What exact warning are you not understanding?",
"@ohmeow that really depends on what you want to do! Sequence classification is a large subject, with many different tasks. [Here's](https://huggingface.co/models/?filter=text-classification) a list of all available checkpoints fine-tuned on sequence classification (not all are for BERT, though!)\r\n\r\nPlease be aware that if you have a specific task in mind, you should fine-tune your model to that task.",
"@LysandreJik Hey, What I am not able to understand is that I was using this code for more than 2 weeks and no warning came up till yesterday. I haven't changed anything but suddenly this warning came up is confusing.\r\nI am not getting the same output dimension as before and not able to complete my project.\r\n",
"The warning came up yesterday because version 3.0.0 was released yesterday. It's weird that you saw an output dimension changed since yesterday. What's the error you get?",
"I see this same warning when initializing `BertForMaskedLM`, pasted in below for good measure. As other posters have mentioned, this warning began appearing only after upgrading to v3.0.0. \r\n\r\n```\r\nSome weights of the model checkpoint at bert-large-uncased-whole-word-masking were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']\r\n- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of BertForMaskedLM were not initialized from the model checkpoint at bert-large-uncased-whole-word-masking and are newly initialized: ['cls.predictions.decoder.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nNote that my module imports/initializations essentially duplicate the snippet demonstrating cloze task usage at https://huggingface.co/bert-large-uncased-whole-word-masking?text=Paris+is+the+%5BMASK%5D+of+France.\r\n\r\n```\r\nfrom transformers import BertTokenizer, BertForMaskedLM\r\n\r\n_tokenizer = BertTokenizer.from_pretrained(\r\n 'bert-large-uncased-whole-word-masking')\r\n_model = BertForMaskedLM.from_pretrained(\r\n 'bert-large-uncased-whole-word-masking')\r\n```\r\n\r\nAm I correct in assuming that nothing has changed in the behavior of the relevant model, but that perhaps this warning should have been being printed all along?",
"You're right, this has always been the behavior of the models. It wasn't clear enough before, so we've clarified it with this warning.",
"Thanks, @LysandreJik .",
"Anyone knows how to suppress this warning? I am aware that the model needs fine-tuning and I am fine-tuning it so, it becomes annoying to see this over and over again.",
"You can manage the warnings with the `logging` utility introduced in version 3.1.0:\r\n\r\n```py\r\nfrom transformers import logging\r\n\r\nlogging.set_verbosity_warning()\r\n```",
"@LysandreJik Thanks for the rapid response, I set it with set_verbosity_error()\r\n",
"@LysandreJik - So , by default bert-base-uncased loading from ```TFBertModel``` has ```199``` variables ```[ 3embedding + 2 layer norms + (16 x 12 layers) + 2 (pooler kernel and bias )] ```. \r\n\r\nBut when loading from ```TFBertForMaskedLM```, it has ```204``` variables. Below are the 5 extra variables\r\n\r\n```\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/bias:0\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/kernel:0\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/bias:0\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/gamma:0\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/beta:0\r\n```\r\n\r\nSo that means , these 5 variables are randomly initialising right. \r\nAre these 5 variables required for MLM ( is this how it is in official tensorflow models ) \r\n\r\nOR\r\n\r\ncan we take output token embeddings ( before passing to mlm___cls ) ```( batch x sequence x embedding_dimension ) ```, multiply it with ```word_embedding matrix``` to produce ```( batch x sequence x vocab_size ) ``` and then use that for MLM loss . \r\n\r\n\r\n",
"@LysandreJik I'm having a slightly different issue here - I'm loading a sequence classification checkpoint in a `AutoModelForSequenceClassification` model. But I still get the warning. Here's my code:\r\n\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\r\nmodel = AutoModelForSequenceClassification.from_pretrained('roberta-large-mnli')\r\n```\r\n\r\nOutput:\r\n```\r\nSome weights of the model checkpoint at roberta-large-mnli were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']\r\n- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n```\r\n\r\nI believe it's NOT expected because I'm indeed initializing from a model that I expect to be exactly identical.\r\n\r\nI'm only starting to get this warning after upgrading to transformers v3 as well. I'm using 3.3.1 currently. Could you please help? Thanks!\r\n",
"@s4sarath I'm not sure I understand your question.\r\n\r\n@veronica320, the pooler layer is not used when doing sequence classification, so there's nothing to be worried about.\r\n\r\nThe pooler is the second output of the `RobertaModel`: \r\nhttps://github.com/huggingface/transformers/blob/v3.4.0/src/transformers/modeling_roberta.py#L691\r\n\r\nBut only the first output is used in the sequence classification model:\r\nhttps://github.com/huggingface/transformers/blob/v3.4.0/src/transformers/modeling_roberta.py#L1002",
"Thanks a lot!",
"@LysandreJik - Sorry to make you confused .\r\n```\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/bias:0\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/kernel:0\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/bias:0\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/gamma:0\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/beta:0\r\n```\r\nThe above 4 variables are randomly initialising right, means they were not a part of official BERT . \r\nAm i right?",
"Thank you for your explanation.\r\n\r\nActually these four variables shouldn't be initialized randomly, as they're part of BERT. The official BERT checkpoints contain two heads: the MLM head and the NSP head.\r\n\r\nYou can see it here:\r\n```py\r\n>>> from transformers import TFBertForMaskedLM\r\n>>> model = TFBertForMaskedLM.from_pretrained(\"bert-base-cased\")\r\n```\r\nAmong the logging, you should find this:\r\n```\r\nSome layers from the model checkpoint at bert-base-cased were not used when initializing TFBertForMaskedLM: ['nsp___cls']\r\n- This IS expected if you are initializing TFBertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing TFBertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nAll the layers of TFBertForMaskedLM were initialized from the model checkpoint at bert-base-cased.\r\n```\r\nThis tells you two things:\r\n- Some layers of the checkpoints are not used. These are `['nsp___cls']`, corresponding to the CLS head. Since we're using a `***ForMaskedLM`, it makes sense not to use the CLS head\r\n- All the layers of the model were initialized from the model checkpoint, as both the transformer layers and the MLM head were present in the checkpoint.\r\n\r\n\r\nIf you're getting those variables randomly initialized:\r\n```\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/bias:0\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/kernel:0\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/bias:0\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/gamma:0\r\ntf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/beta:0\r\n```\r\nthen it means you're using a checkpoint that does not contain these variables. These are the MLM layers, so you're probably loading a checkpoint that was saved using an architecture that does not contain these layers. This can happen if you do the following:\r\n\r\n```py\r\n>>> from transformers import TFBertModel, TFBertForMaskedLM\r\n>>> model = TFBertModel.from_pretrained(\"bert-base-cased\")\r\n>>> model.save_pretrained(directory)\r\n>>> mlm_model = TFBertForMaskedLM.from_pretrained(directory)\r\n```\r\n\r\nI hope this answers your question!",
"Oh okay. Thank you so much for the clarification. When I looked at bert\nmodels from tf-hub , these 4 variables were not present. That was the\nreason for the confusion .\n\nOn Tue, Oct 27, 2020, 7:02 PM Lysandre Debut <[email protected]>\nwrote:\n\n> Thank you for your explanation.\n>\n> Actually these four variables shouldn't be initialized randomly, as\n> they're part of BERT. The official BERT checkpoints contain two heads: the\n> MLM head and the NSP head.\n>\n> You can see it here:\n>\n> >>> from transformers import TFBertForMaskedLM>>> model = TFBertForMaskedLM.from_pretrained(\"bert-base-cased\")\n>\n> Among the logging, you should find this:\n>\n> Some layers from the model checkpoint at bert-base-cased were not used when initializing TFBertForMaskedLM: ['nsp___cls']\n> - This IS expected if you are initializing TFBertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\n> - This IS NOT expected if you are initializing TFBertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n> All the layers of TFBertForMaskedLM were initialized from the model checkpoint at bert-base-cased.\n>\n> This tells you two things:\n>\n> - Some layers of the checkpoints are not used. These are ['nsp___cls'],\n> corresponding to the CLS head. Since we're using a ***ForMaskedLM, it\n> makes sense not to use the CLS head\n> - All the layers of the model were initialized from the model\n> checkpoint, as both the transformer layers and the MLM head were present in\n> the checkpoint.\n>\n> If you're getting those variables randomly initialized:\n>\n> tf_bert_for_masked_lm_1/mlm___cls/predictions/bias:0\n> tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/kernel:0\n> tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/dense/bias:0\n> tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/gamma:0\n> tf_bert_for_masked_lm_1/mlm___cls/predictions/transform/LayerNorm/beta:0\n>\n> then it means you're using a checkpoint that does not contain these\n> variables. These are the MLM layers, so you're probably loading a\n> checkpoint that was saved using an architecture that does not contain these\n> layers. This can happen if you do the following:\n>\n> >>> from transformers import TFBertModel, TFBertForMaskedLM>>> model = TFBertModel.from_pretrained(\"bert-base-cased\")>>> model.save_pretrained(directory)>>> mlm_model = TFBertForMaskedLM.from_pretrained(directory)\n>\n> I hope this answers your question!\n>\n> โ\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/5421#issuecomment-717245807>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACRE6KEEQACWSAEO3GK3CL3SM3DYNANCNFSM4OM5S2SQ>\n> .\n>\n",
"Hi @LysandreJik . I had a look at the official BERT repo . There are only 199 variables in the official model checkpoints. Which means, of 204 variables ( last 5 variables for MLM layer ) is initialised randomly. These variables are not a part of official checkpoints I think. ",
"> @ohmeow you're loading the `bert-base-cased` checkpoint (which is a checkpoint that was trained using a similar architecture to `BertForPreTraining`) in a `BertForSequenceClassification` model.\r\n> \r\n> This means that:\r\n> \r\n> * The layers that `BertForPreTraining` has, but `BertForSequenceClassification` does not have will be discarded\r\n> * The layers that `BertForSequenceClassification` has but `BertForPreTraining` does not have will be randomly initialized.\r\n> \r\n> This is expected, and tells you that you won't have good performance with your `BertForSequenceClassification` model before you fine-tune it ๐.\r\n> \r\n> @fliptrail this warning means that during your training, you're not using the `pooler` in order to compute the loss. I don't know how you're finetuning your model, but if you're not using the pooler layer then there's no need to worry about that warning.\r\n\r\nWhere does the random initialization of the missing parameters occur? I don't see any calls to `_init_weights`.",
"@rkunani - did you get answer to this? I am also facing the same issue....",
"@PremalMatalia I looked into it myself and found that the initialization of the `nn.Linear` layer on line 1469 [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py) is where the parameters are randomly initialized (see the `nn.Linear` [documentation](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html)). ",
"There is something wrong. There is nothing to be randomly initia;ized,\nunless it is a new layer out of architecture.\n\nOn Sun, Apr 4, 2021 at 5:02 AM Raguvir Kunani ***@***.***>\nwrote:\n\n> @PremalMatalia <https://github.com/PremalMatalia> I looked into it myself\n> and found that the initialization of the nn.Linear layer on line 1469 here\n> <https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py>\n> is where the parameters are randomly initialized (see the nn.Linear\n> documentation\n> <https://pytorch.org/docs/stable/generated/torch.nn.Linear.html>).\n>\n> โ\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/5421#issuecomment-812940724>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACRE6KHJ4RZMPNOT6KWU7HTTG6QPDANCNFSM4OM5S2SQ>\n> .\n>\n"
] | 1,593 | 1,704 | 1,593 | CONTRIBUTOR | null | ```
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
```
returns this warning message:
```
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
This just started popping up with v3, so I'm not sure what the recommended action to take here is. Please advise if you can. Basically, any of my code using `AutoModelFor<X>` is throwing this warning now.
Thanks.
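One way to inspect exactly which weights the warning refers to — a sketch, assuming the `output_loading_info` flag available in v3.x (the exact dict keys may vary by version):

```py
from transformers import AutoModelForSequenceClassification

model, loading_info = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", output_loading_info=True
)
# Parameters the model defines but the checkpoint lacks (randomly initialized):
print(loading_info["missing_keys"])     # ['classifier.weight', 'classifier.bias']
# Checkpoint weights the model has no module for (the pretraining heads):
print(loading_info["unexpected_keys"])  # cls.predictions.* / cls.seq_relationship.*
```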
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5421/reactions",
"total_count": 97,
"+1": 97,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5421/timeline | completed | null | null |