Dataset columns (one record per GitHub issue or pull request in huggingface/transformers):

| Field | Type | Observed lengths / values |
|---|---|---|
| url | string | lengths 62–66 |
| repository_url | string | 1 value |
| labels_url | string | lengths 76–80 |
| comments_url | string | lengths 71–75 |
| events_url | string | lengths 69–73 |
| html_url | string | lengths 50–56 |
| id | int64 | 377M – 2.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1 – 29.2k |
| title | string | lengths 1–487 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k – 1.71k |
| updated_at | int64 | 1.54k – 1.71k |
| closed_at | int64 | 1.54k – 1.71k |
| author_association | string | 4 values |
| active_lock_reason | string | 2 values |
| body | string | lengths 0 – 234k |
| reactions | dict | |
| timeline_url | string | lengths 71–75 |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |

Each example record below lists its values in this column order, one field per line.
https://api.github.com/repos/huggingface/transformers/issues/8925
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8925/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8925/comments
https://api.github.com/repos/huggingface/transformers/issues/8925/events
https://github.com/huggingface/transformers/pull/8925
757,242,112
MDExOlB1bGxSZXF1ZXN0NTMyNjUwNjE2
8,925
Fix TF T5 only encoder model with booleans
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,607
1,607
1,607
MEMBER
null
This model was not adapted to the new inputs processing.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8925/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8925", "html_url": "https://github.com/huggingface/transformers/pull/8925", "diff_url": "https://github.com/huggingface/transformers/pull/8925.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8925.patch", "merged_at": 1607102928000 }
https://api.github.com/repos/huggingface/transformers/issues/8924
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8924/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8924/comments
https://api.github.com/repos/huggingface/transformers/issues/8924/events
https://github.com/huggingface/transformers/pull/8924
757,175,536
MDExOlB1bGxSZXF1ZXN0NTMyNTk1Mjkw
8,924
Add new SQUAD example
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'd like the new examples scripts to stay fairly focused on one problem (at the cost of potentially have some duplicate codes) so they're easy to understand (and tweak) by users. We don't support any kind of datasets either (if your QA dataset has fields with names slightly different than SQUAD for instance), users are supposed to adapt the relevant lines in the code to their needs.\r\n\r\nSo with that in mind, I'd definitely prefer a separate script :-)" ]
1,607
1,607
1,607
COLLABORATOR
null
# What does this PR do? This PR adds a new example for SQUAD (v1 and v2) for simple models (e.g., not the XLNet/XLM more complex version, another example will follow for those) using the datasets library and all the features of the fast tokenizer to simplify considerably the preprocessing and the post-processing. I've compared the new version to the old one and did not find major differences when: - fine-tuning a model on SQUAD v1 or v2 with the old and new script - evaluation an existing model fine-tuned on SQUAD v1 or v2 with the old and new script The only difference I found was when evaluating an existing model fine-tuned on SQUAD v1 and evaluating it on SQUAD v2. For those, the new script is a bit less good at predicting the null answers (but those models have crappy results on SQUAD v2 anyway, they are just a bit more crappy). Further plans are: - add a subclass of Trainer for QA so that the evaluation is done directly with `trainer.evaluate()` - add a script for XLNet/XLM
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8924/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8924", "html_url": "https://github.com/huggingface/transformers/pull/8924", "diff_url": "https://github.com/huggingface/transformers/pull/8924.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8924.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8923
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8923/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8923/comments
https://api.github.com/repos/huggingface/transformers/issues/8923/events
https://github.com/huggingface/transformers/issues/8923
757,031,928
MDU6SXNzdWU3NTcwMzE5Mjg=
8,923
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
{ "login": "Ki6an", "id": 63173962, "node_id": "MDQ6VXNlcjYzMTczOTYy", "avatar_url": "https://avatars.githubusercontent.com/u/63173962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ki6an", "html_url": "https://github.com/Ki6an", "followers_url": "https://api.github.com/users/Ki6an/followers", "following_url": "https://api.github.com/users/Ki6an/following{/other_user}", "gists_url": "https://api.github.com/users/Ki6an/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ki6an/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ki6an/subscriptions", "organizations_url": "https://api.github.com/users/Ki6an/orgs", "repos_url": "https://api.github.com/users/Ki6an/repos", "events_url": "https://api.github.com/users/Ki6an/events{/privacy}", "received_events_url": "https://api.github.com/users/Ki6an/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten might be able to help", "Seq2Seq models are a bit special - they also need `decoder_input_ids` as the error message states. Since torchscript however does not allow keyword arguments we need to provide positional arguments and therefore it's mandatory to also provide the 2nd argument being the `attention_mask` (for the encoder).\r\n\r\nThe following is what you are looking for (I think):\r\n\r\n```python\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\nimport torch\r\n\r\ntokenizer = T5Tokenizer.from_pretrained('t5-small')\r\nmodel = T5ForConditionalGeneration.from_pretrained('t5-small', torchscript = True)\r\ninput_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids\r\nattention_mask = input_ids.ne(model.config.pad_token_id).long()\r\ndecoder_input_ids = tokenizer('<pad> <extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids\r\n\r\ntraced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))\r\ntorch.jit.save(traced_model, \"traced_t5.pt\")\r\n```", "Thank you for the solution, your mentioned code works perfectly fine for creating a `torchscript `model. \r\nBut I have one more question, the generated `traced_t5.pt` model doesn't seem to have the `model.generate()` method. \r\nhow to get the `token ids` output from this newly created model? (only from model()). \r\nAlso, In General, how can we get the output (token ids) from a t5 model without using the `generate()` method?", "Yeah, I don't think our `generate()` method is torchscriptable yet :-/ You should take a look at the `greedy_search` method to see how the `generate()` method can be implemented by hand :-) \r\n\r\nGreedy search: https://github.com/huggingface/transformers/blob/df311a5ccf50be3031474e289b43b1be43111144/src/transformers/generation_utils.py#L622\r\n\r\nGenerate: \r\nhttps://github.com/huggingface/transformers/blob/df311a5ccf50be3031474e289b43b1be43111144/src/transformers/generation_utils.py#L296\r\n\r\n", "thank you, I'll look into it. ", "@Ki6an : Were you able to figure out how to make use of `greedy_search` to do the work which `generate` does? If so can I request you to share that as a gist?", "@karrtikiyerkcm have a look at [FastT5](https://github.com/Ki6an/fastT5) library, it implements both greedy and beam search for T5. ", "Thanks @Ki6an , I was trying something similar for Pegasus Models for the summarisation task.", "@Ki6an Hello, the input_id I inputed is 64*100 (Batch_size,max_sequence), why the size of \r\n T5ForConditionalGeneration.generate result is 100?where is the batch_size\r\n\r\n", "anyone know if it's still not possible to use torchscript with generate?", "@patrickvonplaten @Ki6an Hi, what should ```decoder_input_ids``` be if my input text is ```translate English to German: Thank you!```? I'm using this for inference. For decoder models like BERT and GPT, all I need to do is use Tokenizer to get the ```input_ids``` which will be passed into the models. But I'm not sure how that works for encoder-decoder models like T5 here.", "> @patrickvonplaten @Ki6an Hi, what should `decoder_input_ids` be if my input text is `translate English to German: Thank you!`? I'm using this for inference. For decoder models like BERT and GPT, all I need to do is use Tokenizer to get the `input_ids` which will be passed into the models. But I'm not sure how that works for encoder-decoder models like T5 here.\r\n\r\nHi, I wanted to follow up on this. 
I have the same question.", "> Seq2Seq models are a bit special - they also need `decoder_input_ids` as the error message states. Since torchscript however does not allow keyword arguments we need to provide positional arguments and therefore it's mandatory to also provide the 2nd argument being the `attention_mask` (for the encoder).\r\n> \r\n> The following is what you are looking for (I think):\r\n> \r\n> ```python\r\n> from transformers import T5Tokenizer, T5ForConditionalGeneration\r\n> import torch\r\n> \r\n> tokenizer = T5Tokenizer.from_pretrained('t5-small')\r\n> model = T5ForConditionalGeneration.from_pretrained('t5-small', torchscript = True)\r\n> input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids\r\n> attention_mask = input_ids.ne(model.config.pad_token_id).long()\r\n> decoder_input_ids = tokenizer('<pad> <extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids\r\n> \r\n> traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))\r\n> torch.jit.save(traced_model, \"traced_t5.pt\")\r\n> ```\r\n\r\nHi can you please provide example of how to use this t5 jit traced model for inference..\r\nI tried using it but it requires decoder_input_ids.. is there any way of doing inference without the decoder_input_ids?" ]
1,607
1,688
1,611
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: **4.0.0** - Platform: **google colab** - Python version: 3 - PyTorch version (GPU?): 1.7.0+cu101 ### Who can help @patrickvonplaten @patil-suraj ## Information Model I am using (T5): The problem arises when using: ```python from transformers import T5Tokenizer, T5ForConditionalGeneration import sentencepiece tokenizer = T5Tokenizer.from_pretrained('t5-small') model = T5ForConditionalGeneration.from_pretrained('t5-small', torchscript = True) input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids labels = tokenizer('<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>', return_tensors='pt').input_ids outputs = model(input_ids=input_ids, labels=labels) ``` ```input_ids = tokenizer("summarize: studies have shown that owning a dog is good for you ", return_tensors="pt").input_ids # Batch size 1 outputs = model.generate(input_ids) ``` ```import torch traced_model = torch.jit.trace(model, input_ids ) torch.jit.save(traced_model, "traced_t5.pt") ``` as mentioned in the [article ](https://huggingface.co/transformers/torchscript.html#saving-a-model) I tried to convert the model to `torchscript ` `T5ForConditionalGeneration` model is not supporting `trace` function for converting the model to `torchscript` the output produced : ```--------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-7-e37c13fee7bc> in <module>() 1 import torch ----> 2 traced_model = torch.jit.trace(model, input_ids ) 3 torch.jit.save(traced_model, "traced_t5.pt") 7 frames /usr/local/lib/python3.6/dist-packages/transformers/models/t5/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 774 else: 775 err_msg_prefix = "decoder_" if self.is_decoder else "" --> 776 raise ValueError(f"You have to specify either {err_msg_prefix}inputs or {err_msg_prefix}inputs_embeds") 777 778 if inputs_embeds is None: ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds ``` I got the same issue when converting a question-generation T5 model to `torchscript`, and the issue is [here](https://github.com/patil-suraj/question_generation/issues/52)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8923/timeline
completed
null
null
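The maintainers' suggestion in the thread above is to reimplement `generate()` by hand along the lines of `greedy_search`, since `generate()` is not torchscriptable. As a reference point, here is a simplified greedy-decoding loop for T5; it uses the eager model for clarity, and the same loop can drive a traced module by passing the three tensors positionally and taking the first output. The prompt and the 20-token limit are illustrative choices, not values from the thread. It also shows that at inference time `decoder_input_ids` starts from the model's `decoder_start_token_id` and grows by one token per step.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.eval()

input_ids = tokenizer("translate English to German: Thank you!", return_tensors="pt").input_ids
attention_mask = input_ids.ne(model.config.pad_token_id).long()
# T5 starts decoding from its decoder start token (the pad token).
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)

with torch.no_grad():
    for _ in range(20):  # maximum number of generated tokens (arbitrary)
        logits = model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            decoder_input_ids=decoder_input_ids,
        ).logits
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
        if next_token.item() == model.config.eos_token_id:
            break

print(tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True))
```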
https://api.github.com/repos/huggingface/transformers/issues/8922
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8922/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8922/comments
https://api.github.com/repos/huggingface/transformers/issues/8922/events
https://github.com/huggingface/transformers/pull/8922
756,907,915
MDExOlB1bGxSZXF1ZXN0NTMyMzc0ODEy
8,922
Add comet
{ "login": "roy29fuku", "id": 18661965, "node_id": "MDQ6VXNlcjE4NjYxOTY1", "avatar_url": "https://avatars.githubusercontent.com/u/18661965?v=4", "gravatar_id": "", "url": "https://api.github.com/users/roy29fuku", "html_url": "https://github.com/roy29fuku", "followers_url": "https://api.github.com/users/roy29fuku/followers", "following_url": "https://api.github.com/users/roy29fuku/following{/other_user}", "gists_url": "https://api.github.com/users/roy29fuku/gists{/gist_id}", "starred_url": "https://api.github.com/users/roy29fuku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/roy29fuku/subscriptions", "organizations_url": "https://api.github.com/users/roy29fuku/orgs", "repos_url": "https://api.github.com/users/roy29fuku/repos", "events_url": "https://api.github.com/users/roy29fuku/events{/privacy}", "received_events_url": "https://api.github.com/users/roy29fuku/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, what is this?", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,607
1,614
1,614
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8922/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8922", "html_url": "https://github.com/huggingface/transformers/pull/8922", "diff_url": "https://github.com/huggingface/transformers/pull/8922.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8922.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8921
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8921/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8921/comments
https://api.github.com/repos/huggingface/transformers/issues/8921/events
https://github.com/huggingface/transformers/issues/8921
756,892,221
MDU6SXNzdWU3NTY4OTIyMjE=
8,921
TransfoXL Slow Test Fails
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,607
1,619
1,619
MEMBER
null
This test needs to be fixed: ``` pytest -s tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelLanguageGenerationTest::test_lm_generate_transfo_xl_wt103 ``` @patrickvonplaten pinging myself. cc @jplu (for notice)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8921/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8921/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8920
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8920/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8920/comments
https://api.github.com/repos/huggingface/transformers/issues/8920/events
https://github.com/huggingface/transformers/pull/8920
756,592,695
MDExOlB1bGxSZXF1ZXN0NTMyMTE4MzYx
8,920
Patch model parallel test
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,607
1,607
1,607
MEMBER
null
Patches the model parallel tests.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8920/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8920", "html_url": "https://github.com/huggingface/transformers/pull/8920", "diff_url": "https://github.com/huggingface/transformers/pull/8920.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8920.patch", "merged_at": 1607033747000 }
https://api.github.com/repos/huggingface/transformers/issues/8919
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8919/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8919/comments
https://api.github.com/repos/huggingface/transformers/issues/8919/events
https://github.com/huggingface/transformers/issues/8919
756,564,457
MDU6SXNzdWU3NTY1NjQ0NTc=
8,919
BertModel outputs string instead of tensor
{ "login": "miguelwon", "id": 7373193, "node_id": "MDQ6VXNlcjczNzMxOTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7373193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/miguelwon", "html_url": "https://github.com/miguelwon", "followers_url": "https://api.github.com/users/miguelwon/followers", "following_url": "https://api.github.com/users/miguelwon/following{/other_user}", "gists_url": "https://api.github.com/users/miguelwon/gists{/gist_id}", "starred_url": "https://api.github.com/users/miguelwon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miguelwon/subscriptions", "organizations_url": "https://api.github.com/users/miguelwon/orgs", "repos_url": "https://api.github.com/users/miguelwon/repos", "events_url": "https://api.github.com/users/miguelwon/events{/privacy}", "received_events_url": "https://api.github.com/users/miguelwon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! Indeed, model outputs cannot be unpacked this way. It is mentioned in the [documentation](https://huggingface.co/transformers/main_classes/output.html#transformers.file_utils.ModelOutput). You can retrieve the items by unpacking them like this if you use the `.to_tuple()` method.", "Oh, ok. Thanks and sorry for missing that. ", "No problem, happy to help!" ]
1,607
1,607
1,607
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.0.0 - Platform: Linux-4.15.0-46-generic-x86_64-with-Ubuntu-16.04-xenial - Python version: 3.6.7 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help Sorry, no idea. ## Information Model I am using (Bert, XLNet ...): BERT The problem arises when using: * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] my own task or dataset: (give details below) ## To reproduce ``` import transformers from transformers import BertModel, BertTokenizer PRE_TRAINED_MODEL_NAME = 'bert-base-cased' PATH_OF_CACHE = "some_path" tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME,cache_dir = PATH_OF_CACHE) sample_txt = 'When was I last outside? I am stuck at home for 2 weeks.' encoding_sample = tokenizer.encode_plus( sample_txt, max_length=32, add_special_tokens=True, # Add '[CLS]' and '[SEP]' return_token_type_ids=False, padding=True, truncation = True, return_attention_mask=True, return_tensors='pt', # Return PyTorch tensors ) bert_model = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME,cache_dir = PATH_OF_CACHE) last_hidden_state, pooled_output = bert_model( encoding_sample['input_ids'], encoding_sample['attention_mask'] ) print([last_hidden_state,pooled_output]) ``` I'm getting this very odd behaviour where the output are two strings named from the variables: ``` (env) mwon@sebruno2:~/data-mwon/paperChega/src_classificador$ python test.py ['last_hidden_state', 'pooler_output'] ``` ## Expected behavior I expected to output a tensor with the hidden state of the last layer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8919/timeline
completed
null
null
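The resolution in the thread above is that model outputs are `ModelOutput` objects, so iterating or unpacking them directly yields the field names rather than tensors; the supported patterns are attribute access or an explicit `.to_tuple()`. A short sketch of both access styles, using the same checkpoint as the report:

```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertModel.from_pretrained("bert-base-cased")

encoding = tokenizer(
    "When was I last outside? I am stuck at home for 2 weeks.",
    return_tensors="pt",
)
outputs = model(**encoding)

# Access by attribute (recommended)...
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output

# ...or convert to a plain tuple when positional unpacking is needed.
last_hidden_state, pooled_output = outputs.to_tuple()

print(last_hidden_state.shape, pooled_output.shape)
```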
https://api.github.com/repos/huggingface/transformers/issues/8918
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8918/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8918/comments
https://api.github.com/repos/huggingface/transformers/issues/8918/events
https://github.com/huggingface/transformers/pull/8918
756,474,643
MDExOlB1bGxSZXF1ZXN0NTMyMDE0NjA5
8,918
Put Transformers on Conda
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,607
1,607
1,607
MEMBER
null
Puts transformers on conda, on the `huggingface` channel. Installation can be done as: ``` conda install -c huggingface transformers ``` Will push a build on the channel on every new tag.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8918/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8918/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8918", "html_url": "https://github.com/huggingface/transformers/pull/8918", "diff_url": "https://github.com/huggingface/transformers/pull/8918.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8918.patch", "merged_at": 1607023730000 }
https://api.github.com/repos/huggingface/transformers/issues/8917
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8917/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8917/comments
https://api.github.com/repos/huggingface/transformers/issues/8917/events
https://github.com/huggingface/transformers/pull/8917
756,314,496
MDExOlB1bGxSZXF1ZXN0NTMxODc5MDg1
8,917
Fix move when the two cache folders exist
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,607
1,607
1,607
COLLABORATOR
null
# What does this PR do? When doing local checkouts of PRs that predate the cache move, we end up with the two cache folders existing and the automatic move fails. This PR fixes that.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8917/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8917", "html_url": "https://github.com/huggingface/transformers/pull/8917", "diff_url": "https://github.com/huggingface/transformers/pull/8917.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8917.patch", "merged_at": 1607010614000 }
https://api.github.com/repos/huggingface/transformers/issues/8916
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8916/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8916/comments
https://api.github.com/repos/huggingface/transformers/issues/8916/events
https://github.com/huggingface/transformers/issues/8916
756,300,750
MDU6SXNzdWU3NTYzMDA3NTA=
8,916
Impossible to use sentencepiece
{ "login": "lematmat", "id": 19993147, "node_id": "MDQ6VXNlcjE5OTkzMTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/19993147?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lematmat", "html_url": "https://github.com/lematmat", "followers_url": "https://api.github.com/users/lematmat/followers", "following_url": "https://api.github.com/users/lematmat/following{/other_user}", "gists_url": "https://api.github.com/users/lematmat/gists{/gist_id}", "starred_url": "https://api.github.com/users/lematmat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lematmat/subscriptions", "organizations_url": "https://api.github.com/users/lematmat/orgs", "repos_url": "https://api.github.com/users/lematmat/repos", "events_url": "https://api.github.com/users/lematmat/events{/privacy}", "received_events_url": "https://api.github.com/users/lematmat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Have you tried restarting the kernel after installing `sentencepiece`?", "Yes, I did, with:\r\n!pip install --upgrade sentencepiece", "Is it possible for you to share your notebook so that I may take a look?", "I've just relaunched my notebook now, I don't have any issue now.\r\n\r\nThank you for your help\r\nRegards", "Glad it works for you now!" ]
1,607
1,607
1,607
NONE
null
Hi, I explicitly installed both the latest version of transformers (v4.0.0) and Sentencepiece (v0.1.84) as it is specified as it is specified in the release history: ![Capture d’écran 2020-12-03 à 16 12 29](https://user-images.githubusercontent.com/19993147/101047903-7fbc4e80-3582-11eb-9e72-ada4a1b55366.png) And the I I try to import MarianMT Tokeninzer I have the following issue: ![Capture d’écran 2020-12-03 à 16 16 41](https://user-images.githubusercontent.com/19993147/101048715-26a0ea80-3583-11eb-9448-568a9e1ebe60.png) So, any idea why I 'm getting that issue ? Best Regards, Leman
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8916/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8915
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8915/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8915/comments
https://api.github.com/repos/huggingface/transformers/issues/8915/events
https://github.com/huggingface/transformers/pull/8915
756,300,334
MDExOlB1bGxSZXF1ZXN0NTMxODY3MzA1
8,915
Avoid erasing the attention mask when double padding
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,607
1,607
1,607
COLLABORATOR
null
# What does this PR do? There is currently a bug when padding the same inputs twice: ``` >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") >>> inputs = tokenizer("Hello, my name is Sylvain!", padding="max_length", max_length=32) >>> print(inputs["attention_mask"]) [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] >>> inputs = tokenizer.pad(inputs, padding="max_length", max_length=32) >>> print(inputs["attention_mask"]) [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` This is done when using the `DataCollatorWithPadding` inside a `Trainer` (which is the default) when the samples have already been padded. This PR fixes that by honoring the current `attention_mask` when no padding is necessary.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8915/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8915", "html_url": "https://github.com/huggingface/transformers/pull/8915", "diff_url": "https://github.com/huggingface/transformers/pull/8915.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8915.patch", "merged_at": 1607010308000 }
https://api.github.com/repos/huggingface/transformers/issues/8914
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8914/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8914/comments
https://api.github.com/repos/huggingface/transformers/issues/8914/events
https://github.com/huggingface/transformers/pull/8914
756,203,573
MDExOlB1bGxSZXF1ZXN0NTMxNzg1NTQz
8,914
Tweak wording + Add badge w/ number of models on the hub
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "[Tooling comment] For some reason `python utils/check_copies.py --fix_and_overwrite` fails on my Python with following:\r\n```\r\n(.env) ipro:transformers gibbon$ python utils/check_copies.py --fix_and_overwrite\r\nTraceback (most recent call last):\r\n File \"utils/check_copies.py\", line 432, in <module>\r\n check_model_table(args.fix_and_overwrite)\r\n File \"utils/check_copies.py\", line 414, in check_model_table\r\n new_table = get_model_table_from_auto_modules()\r\n File \"utils/check_copies.py\", line 328, in get_model_table_from_auto_modules\r\n spec = importlib.util.spec_from_file_location(\r\nAttributeError: module 'importlib' has no attribute 'util'\r\n\r\n```", "I think it's better at the top rather than the end of the list (I don't think a user will read to until the end of the list TBH). We could even put it further up the README!" ]
1,607
1,607
1,607
MEMBER
null
it seemed pertinent to display this here but maybe we can also add it to some other places
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8914/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8914", "html_url": "https://github.com/huggingface/transformers/pull/8914", "diff_url": "https://github.com/huggingface/transformers/pull/8914.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8914.patch", "merged_at": 1607011015000 }
https://api.github.com/repos/huggingface/transformers/issues/8913
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8913/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8913/comments
https://api.github.com/repos/huggingface/transformers/issues/8913/events
https://github.com/huggingface/transformers/issues/8913
756,088,403
MDU6SXNzdWU3NTYwODg0MDM=
8,913
Fine-tune with custom data
{ "login": "whoafridi", "id": 35966401, "node_id": "MDQ6VXNlcjM1OTY2NDAx", "avatar_url": "https://avatars.githubusercontent.com/u/35966401?v=4", "gravatar_id": "", "url": "https://api.github.com/users/whoafridi", "html_url": "https://github.com/whoafridi", "followers_url": "https://api.github.com/users/whoafridi/followers", "following_url": "https://api.github.com/users/whoafridi/following{/other_user}", "gists_url": "https://api.github.com/users/whoafridi/gists{/gist_id}", "starred_url": "https://api.github.com/users/whoafridi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/whoafridi/subscriptions", "organizations_url": "https://api.github.com/users/whoafridi/orgs", "repos_url": "https://api.github.com/users/whoafridi/repos", "events_url": "https://api.github.com/users/whoafridi/events{/privacy}", "received_events_url": "https://api.github.com/users/whoafridi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`run_squad.py` is more complete right now, as `run_squad_trainer.py` can't do evaluation (yet! it will be possible in a few days).\r\n\r\nWe try to keep the github issues for bugs/feature requests.\r\nFor next time, could you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!", "Okay sure. I don't know about the forum - sorry for that. Thank you @LysandreJik " ]
1,606
1,607
1,607
NONE
null
1. What is the difference between `run_squad.py` & `run_squad_trainer.py` ? I've squad like dataset. 2. What script I used for fine-tuning with my own dataset?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8913/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8912
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8912/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8912/comments
https://api.github.com/repos/huggingface/transformers/issues/8912/events
https://github.com/huggingface/transformers/pull/8912
755,946,972
MDExOlB1bGxSZXF1ZXN0NTMxNTczMDc0
8,912
Create README.md
{ "login": "Quangtruong1999", "id": 62788094, "node_id": "MDQ6VXNlcjYyNzg4MDk0", "avatar_url": "https://avatars.githubusercontent.com/u/62788094?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Quangtruong1999", "html_url": "https://github.com/Quangtruong1999", "followers_url": "https://api.github.com/users/Quangtruong1999/followers", "following_url": "https://api.github.com/users/Quangtruong1999/following{/other_user}", "gists_url": "https://api.github.com/users/Quangtruong1999/gists{/gist_id}", "starred_url": "https://api.github.com/users/Quangtruong1999/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Quangtruong1999/subscriptions", "organizations_url": "https://api.github.com/users/Quangtruong1999/orgs", "repos_url": "https://api.github.com/users/Quangtruong1999/repos", "events_url": "https://api.github.com/users/Quangtruong1999/events{/privacy}", "received_events_url": "https://api.github.com/users/Quangtruong1999/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "This doesn't seem correct" ]
1,606
1,607
1,607
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8912/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8912", "html_url": "https://github.com/huggingface/transformers/pull/8912", "diff_url": "https://github.com/huggingface/transformers/pull/8912.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8912.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8911
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8911/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8911/comments
https://api.github.com/repos/huggingface/transformers/issues/8911/events
https://github.com/huggingface/transformers/issues/8911
755,716,404
MDU6SXNzdWU3NTU3MTY0MDQ=
8,911
Help to run an Example Code (it's a bug maybe ?)
{ "login": "Sourciluss667", "id": 45699766, "node_id": "MDQ6VXNlcjQ1Njk5NzY2", "avatar_url": "https://avatars.githubusercontent.com/u/45699766?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sourciluss667", "html_url": "https://github.com/Sourciluss667", "followers_url": "https://api.github.com/users/Sourciluss667/followers", "following_url": "https://api.github.com/users/Sourciluss667/following{/other_user}", "gists_url": "https://api.github.com/users/Sourciluss667/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sourciluss667/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sourciluss667/subscriptions", "organizations_url": "https://api.github.com/users/Sourciluss667/orgs", "repos_url": "https://api.github.com/users/Sourciluss667/repos", "events_url": "https://api.github.com/users/Sourciluss667/events{/privacy}", "received_events_url": "https://api.github.com/users/Sourciluss667/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Hmmm, this is weird, I can't reproduce with a very similar environment:\r\n\r\n```py\r\n- `transformers` version: 4.0.0\r\n- Platform: Linux-5.9.11-arch2-1-x86_64-with-glibc2.10\r\n- Python version: 3.8.3\r\n- PyTorch version (GPU?): 1.7.0 (True)\r\n- Tensorflow version (GPU?): 2.3.1 (False)\r\n```\r\n\r\nIt outputs the following:\r\n\r\n```\r\n{'score': 0.26648467779159546, 'start': 90, 'end': 106, 'answer': 'peintre français'}\r\n```\r\n\r\nDiffering element here seems to be Windows vs Linux. Do you mind trying to install the library from `master` and telling me if that works?\r\n\r\nYou can try it with:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```", "For what it's worth, I ran into the same issue using a GitHub Actions workflow with the following environment:\r\n\r\n```python\r\n- `transformers` version: 4.0.0\r\n- Platform: Windows Server 2019\r\n- Python version: 3.6.8\r\n- PyTorch version (GPU?): 1.6.0 (No)\r\n```\r\n\r\nI modified the workflow to run against the master branch and the issue appears to be resolved there.", "Yes this command solve my issue, thank you :)" ]
1,606
1,607
1,607
NONE
null
## Environment info - `transformers` version: 4.0.0 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.8.6 - PyTorch version (GPU?): 1.7.0+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help I don't know :( ## Information Model I am using (Bert, XLNet ...): fmikaelian/camembert-base-fquad The problem arises when using: * [x] the official example scripts: (give details below) When using the default example script i get this error : ``` Traceback (most recent call last): File ".\test.py", line 5, in <module> nlp({ File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\pipelines.py", line 1874, in __call__ start, end = self.model(**fw_args)[:2] File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\roberta\modeling_roberta.py", line 1286, in forward outputs = self.roberta( File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\roberta\modeling_roberta.py", line 687, in forward embedding_output = self.embeddings( File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\models\roberta\modeling_roberta.py", line 117, in forward inputs_embeds = self.word_embeddings(input_ids) File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\sparse.py", line 124, in forward return F.embedding( File "C:\Users\WTFAn\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\functional.py", line 1852, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.IntTensor instead (while checking arguments for embedding) ``` ## Expected behavior I don't know where I forgot something, but this example code give me this error and I don't know how to resolve this, because the error come from the lib. If someone can help me :) ``` from transformers import pipeline nlp = pipeline('question-answering', model='fmikaelian/camembert-base-fquad', tokenizer='fmikaelian/camembert-base-fquad') nlp({ 'question': "Qui est Claude Monet?", 'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme." }) ``` Edit : I have this issue with all 'question-answering' pipeline
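The thread points to installing from `master` as the actual fix; as a stopgap, a minimal sketch of a workaround is to skip the pipeline and call the model directly, defensively casting the encoded tensors to `int64` (the `IntTensor` error comes from 32-bit integer indices reaching the embedding layer on Windows). The argmax span extraction at the end is a simplified illustration, not the pipeline's full post-processing.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "fmikaelian/camembert-base-fquad"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "Qui est Claude Monet?"
context = (
    "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, "
    "est un peintre français et l'un des fondateurs de l'impressionnisme."
)

inputs = tokenizer(question, context, return_tensors="pt")
# Cast every tensor to int64 so torch.embedding receives LongTensor indices.
inputs = {name: tensor.long() for name, tensor in inputs.items()}

with torch.no_grad():
    outputs = model(**inputs)

# Naive span extraction: take the argmax of the start/end logits.
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits)) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```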
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8911/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8911/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8910
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8910/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8910/comments
https://api.github.com/repos/huggingface/transformers/issues/8910/events
https://github.com/huggingface/transformers/issues/8910
755,642,714
MDU6SXNzdWU3NTU2NDI3MTQ=
8,910
"No log" when training RobertaForSequenceClassification using Trainer
{ "login": "BryanWBear", "id": 7650109, "node_id": "MDQ6VXNlcjc2NTAxMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/7650109?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BryanWBear", "html_url": "https://github.com/BryanWBear", "followers_url": "https://api.github.com/users/BryanWBear/followers", "following_url": "https://api.github.com/users/BryanWBear/following{/other_user}", "gists_url": "https://api.github.com/users/BryanWBear/gists{/gist_id}", "starred_url": "https://api.github.com/users/BryanWBear/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BryanWBear/subscriptions", "organizations_url": "https://api.github.com/users/BryanWBear/orgs", "repos_url": "https://api.github.com/users/BryanWBear/repos", "events_url": "https://api.github.com/users/BryanWBear/events{/privacy}", "received_events_url": "https://api.github.com/users/BryanWBear/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you provide the environment information as mentioned in the issue template, alongside the a reproducible that outputs this so that we may check what's going on? Thank you.", "Hi @BryanWBear , \r\n\r\nI am facing this issue too. In the meantime, did you find a solution?\r\n\r\nThank you so much in advance!", "the default `logging_steps` in `TrainingArguments` is set to `500` steps, so no loss is reported before 500 steps", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
NONE
null
When training, for the first few logging steps I get "No log". Looks like this: Step | Training Loss | Validation Loss | Accuracy | F1 -- | -- | -- | -- | -- 150 | No log | 0.695841 | 0.503277 | 0.410575 300 | No log | 0.696622 | 0.488860 | 0.298561 450 | No log | 0.694300 | 0.499345 | 0.356902 What does this mean? My classifier is performing poorly and I am wondering if this is related. I am finetuning roberta-base using 3k question-answer pairs, 50% positively labelled, 50% negatively labelled. Thanks, Bryan
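As the replies note, `Trainer` only reports the training loss every `logging_steps` optimizer steps, and the default is 500, so evaluation rows printed before step 500 show "No log"; it does not by itself indicate a training problem. A minimal sketch (argument values here are illustrative) of lowering that threshold so the loss column is populated from the first evaluations:

```python
from transformers import TrainingArguments

# logging_steps defaults to 500; lowering it makes the training loss appear
# in the progress table well before step 500.
training_args = TrainingArguments(
    output_dir="roberta-qa-pairs",
    evaluation_strategy="steps",
    eval_steps=150,
    logging_steps=50,
    per_device_train_batch_size=16,
    num_train_epochs=3,
)
print(training_args.logging_steps)  # 50
# Pass training_args to Trainer(...) as usual.
```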
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8910/reactions", "total_count": 24, "+1": 22, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8910/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8909
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8909/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8909/comments
https://api.github.com/repos/huggingface/transformers/issues/8909/events
https://github.com/huggingface/transformers/issues/8909
755,622,179
MDU6SXNzdWU3NTU2MjIxNzk=
8,909
FlaxBertModel examples (and fast attention)
{ "login": "StefanoSalvatori", "id": 38183486, "node_id": "MDQ6VXNlcjM4MTgzNDg2", "avatar_url": "https://avatars.githubusercontent.com/u/38183486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StefanoSalvatori", "html_url": "https://github.com/StefanoSalvatori", "followers_url": "https://api.github.com/users/StefanoSalvatori/followers", "following_url": "https://api.github.com/users/StefanoSalvatori/following{/other_user}", "gists_url": "https://api.github.com/users/StefanoSalvatori/gists{/gist_id}", "starred_url": "https://api.github.com/users/StefanoSalvatori/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StefanoSalvatori/subscriptions", "organizations_url": "https://api.github.com/users/StefanoSalvatori/orgs", "repos_url": "https://api.github.com/users/StefanoSalvatori/repos", "events_url": "https://api.github.com/users/StefanoSalvatori/events{/privacy}", "received_events_url": "https://api.github.com/users/StefanoSalvatori/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It should work - we'll run some experiments soon :-) https://github.com/huggingface/transformers/pull/8358 @TevenLeScao", "Ok, great!\r\n\r\nFor the moment i'm running some experiments myself with FlaxBertModel but i'm getting unexpected behaviors: it seems that the flax implementation is slower than the torch one; I tried to run this simple code in google colab with GPU environment \r\n\r\n```python\r\n!pip install --upgrade pip\r\n!pip install --upgrade jax jaxlib==0.1.57+cuda101 -f https://storage.googleapis.com/jax-releases/jax_releases.html\r\n!pip install flax\r\n!pip install transformers\r\n\r\nimport flax\r\nimport jax\r\nfrom transformers import BertModel\r\nfrom transformers import FlaxBertModel\r\nfrom transformers import AutoTokenizer\r\nimport time\r\n\r\nbaseModel ='bert-base-uncased'\r\ntorchBert = BertModel.from_pretrained(baseModel)\r\nflaxBert = FlaxBertModel.from_pretrained(baseModel)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(baseModel)\r\ntorchInput = tokenizer.encode(\"Random sentence\", truncation=True, padding=True, return_tensors='pt')\r\nflaxInput = tokenizer.encode(\"Random sentence\", truncation=True, padding=True, return_tensors='jax')\r\n\r\nstart_time = time.time()\r\nfor _ in range(10):\r\n torchBert(torchInput)\r\nprint(\"torch - \", time.time()-start_time)\r\n\r\nstart_time = time.time()\r\nfor _ in range(10):\r\n flaxBert(flaxInput)\r\nprint(\"flax - \", time.time()-start_time)\r\n```\r\n\r\nand i'm getting the following output\r\n\r\n```\r\ntorch - 0.6615538597106934\r\nflax - 5.129613161087036\r\n```\r\n\r\nWhat am i missing?", "You should probably use `jax.jit` to speed it up", "Indeed, as @patrickvonplaten says, using `jax.jit` on `flaxBert` will speed things up considerably. This will first shape-compile the function using XLA, and every time you call the function again (provided the shapes are the same), it will run the compiled version directly. I've demonstrated in this Colab: https://colab.research.google.com/drive/1davNsnV34KDZOyJ9i8zZfvxAVjBJC4dp?usp=sharing\r\n\r\nMake sure you set the accelerator to GPU/TPU! (Runtime -> Change runtime type)\r\n\r\nHere's a summary:\r\n\r\n```\r\n>>> %timeit torchBert(torchInput)\r\n1 loop, best of 5: 75.3 ms per loop\r\n>>> %timeit flaxBert(flaxInput)\r\n1 loop, best of 5: 1.41 s per loop\r\n>>> %timeit jitted_flax(flaxInput)\r\n100 loops, best of 5: 11.2 ms per loop\r\n```\r\n\r\nNote that this excluded the compilation time for the first time we called `jitted_flax`. Including this will increase the overall execution time, but since it has to be done only once this is negligible as you execute this function more often.\r\n\r\nTo learn more about JAX's jit, this quickstart is quite useful: https://jax.readthedocs.io/en/latest/notebooks/quickstart.html", "Thank you for you comment @marcvanzee. Indeed after @patrickvonplaten's reply I checked Flax and Jax documentation more carefully confirming that JIT compilation could solve performance issues. It still surprises me though that pytorch is quite fast even without JIT compilation while the same is not true for Flax. Frankly i didn't even know that JIT existed in pytorch so i'd be curious too to se how it compares to Flax.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,621
1,621
NONE
null
Are there any examples that show how to use the FlaxBertModel? Would it be possible to replace the current SelfAttention module with the one proposed here https://github.com/google-research/google-research/tree/master/performer/fast_self_attention?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8909/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8908
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8908/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8908/comments
https://api.github.com/repos/huggingface/transformers/issues/8908/events
https://github.com/huggingface/transformers/issues/8908
755,566,351
MDU6SXNzdWU3NTU1NjYzNTE=
8,908
Question: What's the difference between tokenizer_utils, tokenizer_utils_base & tokenizer_utils_fast
{ "login": "BrandonLiang", "id": 12600264, "node_id": "MDQ6VXNlcjEyNjAwMjY0", "avatar_url": "https://avatars.githubusercontent.com/u/12600264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BrandonLiang", "html_url": "https://github.com/BrandonLiang", "followers_url": "https://api.github.com/users/BrandonLiang/followers", "following_url": "https://api.github.com/users/BrandonLiang/following{/other_user}", "gists_url": "https://api.github.com/users/BrandonLiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/BrandonLiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BrandonLiang/subscriptions", "organizations_url": "https://api.github.com/users/BrandonLiang/orgs", "repos_url": "https://api.github.com/users/BrandonLiang/repos", "events_url": "https://api.github.com/users/BrandonLiang/events{/privacy}", "received_events_url": "https://api.github.com/users/BrandonLiang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
NONE
null
As titled. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8908/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8907
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8907/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8907/comments
https://api.github.com/repos/huggingface/transformers/issues/8907/events
https://github.com/huggingface/transformers/issues/8907
755,458,216
MDU6SXNzdWU3NTU0NTgyMTY=
8,907
Unexpected situation when freezing BertForMaskedLM
{ "login": "jaimeenahn", "id": 32367255, "node_id": "MDQ6VXNlcjMyMzY3MjU1", "avatar_url": "https://avatars.githubusercontent.com/u/32367255?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaimeenahn", "html_url": "https://github.com/jaimeenahn", "followers_url": "https://api.github.com/users/jaimeenahn/followers", "following_url": "https://api.github.com/users/jaimeenahn/following{/other_user}", "gists_url": "https://api.github.com/users/jaimeenahn/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaimeenahn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaimeenahn/subscriptions", "organizations_url": "https://api.github.com/users/jaimeenahn/orgs", "repos_url": "https://api.github.com/users/jaimeenahn/repos", "events_url": "https://api.github.com/users/jaimeenahn/events{/privacy}", "received_events_url": "https://api.github.com/users/jaimeenahn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think this is probably so because the `cls.predictions.decoder` is a linear layer, which is tied to the embeddings layer. They're pointing to the same weights, so freezeing one of those would result in freezing the other one." ]
1,606
1,614
1,614
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-4.18.0-147.el8.x86_64-x86_64-with-Ubuntu-16.04-xenial - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes and No (It happens in both conditions) - Using distributed or parallel set-up in script?: Yes and No (It happens in both conditions) ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @LysandreJik ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Load pretrained BertForMaskedLM ``` from transformers import BertForMaskedLM model = BertForMaskedLM.from_pretrained('bert-base-uncased') ``` 2. Check whether gradients in the cls.predictions.decoder layer are calculated ``` print(model.cls.predictions.decoder.weight.requires_grad) ``` Result: ``` True ``` 3. Check again after only freezing the bert layer ``` for param in model.bert.parameters(): param.requires_grad = False print(model.cls.predictions.decoder.weight.requires_grad) ``` Result: ``` False ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> It only happens only on BertForMaskedLM. If I tried to freeze only the BertModel, cls.predictions.decoder is also frozen. But as expected, cls.prediction.transform is not frozen. The exception only occurs in cls.predictions.decoder . I don't know it is the way you expected but in my sense, it is a kind of unexpected situation for the ones who try to freeze only the BertModel.
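As the reply above suggests, the behavior comes from weight tying: the MLM decoder and the input word embeddings share one parameter, so freezing `model.bert` also flips `requires_grad` on `cls.predictions.decoder.weight`, while the untied parts of the head stay trainable. A small check, as a sketch:

```python
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-uncased")

decoder_weight = model.cls.predictions.decoder.weight
embedding_weight = model.bert.embeddings.word_embeddings.weight

# The MLM decoder is tied to the input embeddings: one shared parameter.
print(decoder_weight is embedding_weight)  # True

for param in model.bert.parameters():
    param.requires_grad = False

# Freezing the shared embedding therefore freezes the decoder weight too...
print(model.cls.predictions.decoder.weight.requires_grad)  # False
# ...while the untied parts of the MLM head remain trainable.
print(model.cls.predictions.transform.dense.weight.requires_grad)  # True
print(model.cls.predictions.decoder.bias.requires_grad)  # True
```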
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8907/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8906
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8906/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8906/comments
https://api.github.com/repos/huggingface/transformers/issues/8906/events
https://github.com/huggingface/transformers/pull/8906
755,452,061
MDExOlB1bGxSZXF1ZXN0NTMxMTU5NjQy
8,906
Corrected a typo in the ReadMe
{ "login": "devangi2000", "id": 54393816, "node_id": "MDQ6VXNlcjU0MzkzODE2", "avatar_url": "https://avatars.githubusercontent.com/u/54393816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/devangi2000", "html_url": "https://github.com/devangi2000", "followers_url": "https://api.github.com/users/devangi2000/followers", "following_url": "https://api.github.com/users/devangi2000/following{/other_user}", "gists_url": "https://api.github.com/users/devangi2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/devangi2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/devangi2000/subscriptions", "organizations_url": "https://api.github.com/users/devangi2000/orgs", "repos_url": "https://api.github.com/users/devangi2000/repos", "events_url": "https://api.github.com/users/devangi2000/events{/privacy}", "received_events_url": "https://api.github.com/users/devangi2000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you very much for this correction, @devangi2000 " ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8906/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8906", "html_url": "https://github.com/huggingface/transformers/pull/8906", "diff_url": "https://github.com/huggingface/transformers/pull/8906.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8906.patch", "merged_at": 1606930124000 }
https://api.github.com/repos/huggingface/transformers/issues/8905
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8905/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8905/comments
https://api.github.com/repos/huggingface/transformers/issues/8905/events
https://github.com/huggingface/transformers/pull/8905
755,450,513
MDExOlB1bGxSZXF1ZXN0NTMxMTU4NDAy
8,905
Fix typo in docstring in src/transformers/models/bert_japanese/tokenization_bert_japanese.py
{ "login": "ryota-mo", "id": 40747105, "node_id": "MDQ6VXNlcjQwNzQ3MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/40747105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ryota-mo", "html_url": "https://github.com/ryota-mo", "followers_url": "https://api.github.com/users/ryota-mo/followers", "following_url": "https://api.github.com/users/ryota-mo/following{/other_user}", "gists_url": "https://api.github.com/users/ryota-mo/gists{/gist_id}", "starred_url": "https://api.github.com/users/ryota-mo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryota-mo/subscriptions", "organizations_url": "https://api.github.com/users/ryota-mo/orgs", "repos_url": "https://api.github.com/users/ryota-mo/repos", "events_url": "https://api.github.com/users/ryota-mo/events{/privacy}", "received_events_url": "https://api.github.com/users/ryota-mo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? Only fix typo (thi -> this). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @sgugger, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8905/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8905", "html_url": "https://github.com/huggingface/transformers/pull/8905", "diff_url": "https://github.com/huggingface/transformers/pull/8905.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8905.patch", "merged_at": 1606928912000 }
https://api.github.com/repos/huggingface/transformers/issues/8904
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8904/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8904/comments
https://api.github.com/repos/huggingface/transformers/issues/8904/events
https://github.com/huggingface/transformers/issues/8904
755,445,813
MDU6SXNzdWU3NTU0NDU4MTM=
8,904
Using doc chunks without answer token during training ( BertForQuestionAnswering )
{ "login": "nikoletta-toth", "id": 65224954, "node_id": "MDQ6VXNlcjY1MjI0OTU0", "avatar_url": "https://avatars.githubusercontent.com/u/65224954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nikoletta-toth", "html_url": "https://github.com/nikoletta-toth", "followers_url": "https://api.github.com/users/nikoletta-toth/followers", "following_url": "https://api.github.com/users/nikoletta-toth/following{/other_user}", "gists_url": "https://api.github.com/users/nikoletta-toth/gists{/gist_id}", "starred_url": "https://api.github.com/users/nikoletta-toth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nikoletta-toth/subscriptions", "organizations_url": "https://api.github.com/users/nikoletta-toth/orgs", "repos_url": "https://api.github.com/users/nikoletta-toth/repos", "events_url": "https://api.github.com/users/nikoletta-toth/events{/privacy}", "received_events_url": "https://api.github.com/users/nikoletta-toth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger you might be interested in that issue given that you're refactoring the squad example!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
NONE
null
Hi! When creating features from the squad examples, you use sliding window approach to generate doc chunks, when the input data is too long. There you state, **_if the document chunk does not contain an annotation, you throw it out, since there is nothing to predict._** Hence you set the start_position=0 and end_position=0. Then you re-set it to cls_index in training mode. When you do **CLS token at the beginning, then your cls_index = 0**. Finishing up, you add this item to the InputFeatures. https://github.com/huggingface/transformers/blob/e768f2322abd2a2f60a3a6d64a6a94c2d957fe89/examples/utils_squad.py#L332-L351 During training - forward step, you calculate loss on these items: Let's say the max seq length is 512, then the ignored_index = 512, meaning you only ignore those start/end positions which are >= 512. With cls_index = 0 we have start_position=0 and end_position=0. So we end up getting a loss calculated using the start_position with start_logits and end_position with end_logits. https://github.com/huggingface/transformers/blob/a8c3f9aa760ed7b516ee00f602e8efc0e5d80285/src/transformers/models/bert/modeling_bert.py#L1651-L1665 Then **you return with the calculated loss and in run_squad.py you add this new loss to the training loss**: https://github.com/huggingface/transformers/blob/a8c3f9aa760ed7b516ee00f602e8efc0e5d80285/examples/question-answering/run_squad.py#L219 **So actually you do not throw these chunks out, but use them for training.** **Solution maybe:** Why not set the start_position = end_position = **max_seq_length** instead of cls_index in the utils_squad.py, making sure they will be ignored during training and the loss calculated with them will be 0 ?? Hope you will understand my point, let me know what you think! **Update:** I got this idea from another repo using bits and pieces from your implementation (they only used labels with answer, so squad-v1 like dataset) : In case of too long sequences the sliding window approach creates multiple doc chunks, some of those are without an annotated answer. The model learns to predict start and end positions as 0 (if CLS is on 0. pos) when the answer is NOT present in the document chunk. This will help reduce false predictions on test data where documents are too long and split into multiple chunks. **So either you actually want to use those doc chunks with no annotation, but your comments are misleading, OR you don't want to use those chunks, hence the comment, but you failed to implement it.** Hope it helps!
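A small, self-contained sketch of the mechanics behind the proposed fix: `CrossEntropyLoss(ignore_index=max_seq_length)` skips targets equal to `max_seq_length`, whereas targets set to the CLS index (0) still contribute to the loss, which is why chunks without an annotated answer currently end up being trained on.

```python
import torch
from torch import nn

max_seq_length = 512
loss_fct = nn.CrossEntropyLoss(ignore_index=max_seq_length)
start_logits = torch.randn(2, max_seq_length)

# Chunk 0 has an annotated answer at position 17, chunk 1 has none.
# Current behaviour: the no-answer chunk points at the CLS index (0),
# so both chunks contribute to the loss.
print(loss_fct(start_logits, torch.tensor([17, 0])))

# Proposed behaviour: point the no-answer chunk at max_seq_length so it is
# masked out by ignore_index and only the annotated chunk is trained on.
print(loss_fct(start_logits, torch.tensor([17, max_seq_length])))
```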
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8904/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8903
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8903/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8903/comments
https://api.github.com/repos/huggingface/transformers/issues/8903/events
https://github.com/huggingface/transformers/pull/8903
755,425,901
MDExOlB1bGxSZXF1ZXN0NTMxMTM4MzY3
8,903
[trainer] improve code readability
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
This PR: * removes redundant code, as: ``` self.model = model if model is not None else None ``` and ``` self.model = model ``` are the same. * decouples attribute assignment from code logic - which simplifies things further. @sgugger, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8903/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8903", "html_url": "https://github.com/huggingface/transformers/pull/8903", "diff_url": "https://github.com/huggingface/transformers/pull/8903.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8903.patch", "merged_at": 1606928863000 }
https://api.github.com/repos/huggingface/transformers/issues/8902
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8902/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8902/comments
https://api.github.com/repos/huggingface/transformers/issues/8902/events
https://github.com/huggingface/transformers/pull/8902
755,332,919
MDExOlB1bGxSZXF1ZXN0NTMxMDYzNjU1
8,902
fix(pipeline): error when model not in AutoModel
{ "login": "voidful", "id": 10904842, "node_id": "MDQ6VXNlcjEwOTA0ODQy", "avatar_url": "https://avatars.githubusercontent.com/u/10904842?v=4", "gravatar_id": "", "url": "https://api.github.com/users/voidful", "html_url": "https://github.com/voidful", "followers_url": "https://api.github.com/users/voidful/followers", "following_url": "https://api.github.com/users/voidful/following{/other_user}", "gists_url": "https://api.github.com/users/voidful/gists{/gist_id}", "starred_url": "https://api.github.com/users/voidful/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/voidful/subscriptions", "organizations_url": "https://api.github.com/users/voidful/orgs", "repos_url": "https://api.github.com/users/voidful/repos", "events_url": "https://api.github.com/users/voidful/events{/privacy}", "received_events_url": "https://api.github.com/users/voidful/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? During pipeline initialize, it will call get_framework to check whether tf or pt model. get_framework will call AutoModel to load model and return where it depends on. However, not all the models are in AutoModel. For example, `Helsinki-NLP/opus-mt-en-fr` is under AutoModelForSeq2SeqLM which will cause an error when calling pipeline. get_framework should depend on task instead. This fix pass targeted_task to get_framework, if the model not in AutoModel, it will use the targeted_task model instead. error example ``` from transformers import pipeline model = pipeline(task="translation_en_to_fr",model="Helsinki-NLP/opus-mt-en-fr") ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8902/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8902", "html_url": "https://github.com/huggingface/transformers/pull/8902", "diff_url": "https://github.com/huggingface/transformers/pull/8902.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8902.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8901
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8901/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8901/comments
https://api.github.com/repos/huggingface/transformers/issues/8901/events
https://github.com/huggingface/transformers/issues/8901
755,298,871
MDU6SXNzdWU3NTUyOTg4NzE=
8,901
Removing Head Layer/Model Conversion
{ "login": "pugantsov", "id": 16597333, "node_id": "MDQ6VXNlcjE2NTk3MzMz", "avatar_url": "https://avatars.githubusercontent.com/u/16597333?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pugantsov", "html_url": "https://github.com/pugantsov", "followers_url": "https://api.github.com/users/pugantsov/followers", "following_url": "https://api.github.com/users/pugantsov/following{/other_user}", "gists_url": "https://api.github.com/users/pugantsov/gists{/gist_id}", "starred_url": "https://api.github.com/users/pugantsov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pugantsov/subscriptions", "organizations_url": "https://api.github.com/users/pugantsov/orgs", "repos_url": "https://api.github.com/users/pugantsov/repos", "events_url": "https://api.github.com/users/pugantsov/events{/privacy}", "received_events_url": "https://api.github.com/users/pugantsov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "A model trained for sequence classification can definitely be loaded with a different head. Here's an example:\r\n\r\n```py\r\nfrom transformers import DistilBertForSequenceClassification, DistilBertForMaskedLM\r\nsequence_classifier = DistilBertForSequenceClassification.from_pretrained(\"...\")\r\n# Do stuff with your model, train it, do what you like\r\n\r\n# Save the weights in a local directory\r\nsequence_classifier.save_pretrained(\"model-trained-on-xxx\")\r\n\r\n# Load the weights in the *ForMaskedLM model.\r\nlanguage_model = DistilBertForMaskedLM.from_pretrained(\"model-trained-on-xxx\")\r\n```\r\nThis `language_model` has kept all the weights of the base transformer model, has discarded the sequence classification layers, and has randomly initialized the new layers. This model can be loaded in an `AutoModelWithLMHead`.", "Oh wow, did not expect it to be this easy. Thanks very much!" ]
1,606
1,606
1,606
NONE
null
I am currently working on some research in which I am to delve into the analysis of decision boundaries in text classification tasks and I am aiming to use recent work from the `ExBERT` paper, allowing me to visualise the importance of particular features across sentences. Since the library is built on top of models from the Transformers library and requires that *The model architecture must be supported by the `AutoModelWithLMHead`*, I was wondering if it was possible to modify a fine-tuned model to work with that architecture. I am currently using `DistilBERTForSequenceClassification` in my pipeline and was wondering if it were possible to essentially fine-tune for a classification task and use the underlying `DistilBERT` model, as I assume all of the attention weights etc. will still be included in the model? ie. Could I change the loaded model after training to work with the library and to work with the `AutoModelWithLMHead` architecture so that I could inspect the attention heads? I wasn't sure if I was only able to use models trained for Masked LM or if I could use models trained for downstream tasks? Apologies if this is a question best for the ExBERT github but since it was built into the library, thought I'd ask.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8901/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8900
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8900/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8900/comments
https://api.github.com/repos/huggingface/transformers/issues/8900/events
https://github.com/huggingface/transformers/pull/8900
755,266,635
MDExOlB1bGxSZXF1ZXN0NTMxMDEwNzQ0
8,900
[Bart] Refactor - fix issues, consistency with the library, naming
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Try the various Marian fine-tuning scripts. You should easily be able to get 22+ BLEU on wmt-en-ro with both `finetune_trainer.py` and `finetune.py` in < 30 minutes on brutasse.", "Speed / Memory benchmark of master vs. this PR is ok for me:\r\n\r\n![Screenshot from 2020-12-08 10-12-43](https://user-images.githubusercontent.com/23423619/101463805-2d8e7b00-393e-11eb-832f-114982569829.png)\r\n", "@patrickvonplaten I would measure impact of running cnn summarization (which uses seq_len 1024) to fully acknowledge the trade off you are making:\r\n```bash\r\ncd examples/seq2seq\r\npython run_eval.py facebook/bart-large-cnn cnn_dm/test.source cnn_gens.txt \\\r\n\t--reference_path cnn_dm/test.target \\\r\n\t--score_path cnn_rouge.json --task summarization \\\r\n\t--n_obs 500 --fp16\r\n```\r\nThis should take ~5 mins per branch.\r\n\r\nOtherwise, LGTM! Thanks for cleaning up after me :)", "> @patrickvonplaten I would measure impact of running cnn summarization (which uses seq_len 1024) to fully acknowledge the trade off you are making:\r\n> \r\n> ```shell\r\n> cd examples/seq2seq\r\n> python run_eval.py facebook/bart-large-cnn cnn_dm/test.source cnn_gens.txt \\\r\n> \t--reference_path cnn_dm/test.target \\\r\n> \t--score_path cnn_rouge.json --task summarization \\\r\n> \t--n_obs 500 --fp16\r\n> ```\r\n> \r\n> This should take ~5 mins per branch.\r\n> \r\n> Otherwise, LGTM! Thanks for cleaning up after me :)\r\n\r\n\r\n\r\n> @patrickvonplaten I would measure impact of running cnn summarization (which uses seq_len 1024) to fully acknowledge the trade off you are making:\r\n> \r\n> ```shell\r\n> cd examples/seq2seq\r\n> python run_eval.py facebook/bart-large-cnn cnn_dm/test.source cnn_gens.txt \\\r\n> \t--reference_path cnn_dm/test.target \\\r\n> \t--score_path cnn_rouge.json --task summarization \\\r\n> \t--n_obs 500 --fp16\r\n> ```\r\n> \r\n> This should take ~5 mins per branch.\r\n> \r\n> Otherwise, LGTM! Thanks for cleaning up after me :)\r\n\r\nThanks for the command! What do you mean by \"per branch\"? ", "Also @sshleifer I didn't really manage to find a good marian command for fine-tuning. Can you by chance copy-paste a command that fine-tunes a marian model in ~30min to verify that fine-tuning works as expected?", "> @patrickvonplaten I would measure impact of running cnn summarization (which uses seq_len 1024) to fully acknowledge the trade off you are making:\r\n> \r\n> ```shell\r\n> cd examples/seq2seq\r\n> python run_eval.py facebook/bart-large-cnn cnn_dm/test.source cnn_gens.txt \\\r\n> \t--reference_path cnn_dm/test.target \\\r\n> \t--score_path cnn_rouge.json --task summarization \\\r\n> \t--n_obs 500 --fp16\r\n> ```\r\n> \r\n> This should take ~5 mins per branch.\r\n> \r\n> Otherwise, LGTM! Thanks for cleaning up after me :)\r\n\r\nI got this result:\r\n\r\n![Screenshot from 2020-12-08 18-56-50](https://user-images.githubusercontent.com/23423619/101522361-61da5980-3987-11eb-9531-cfcfd85000cf.png)\r\n\r\n\r\non brutasse - does this look reasonable to you? took 2min30 ", "What I meant by \"per branch\" was to also run that command on master to facilitate comparison. Your `refactor-bart` output looks completely reasonable.\r\n\r\n#### Train Command\r\nreplace num_train_epochs=1 in this [./examples/seq2seq/builtin_trainer/train_distil_marian_enro.sh](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/builtin_trainer/train_distil_marian_enro.sh). 
\r\n\r\n+ It should take 12-25 minutes on 1 GPU.\r\n+ I don't know the BLEU/timing to expect, you should again run on `master` and `bart-refactor` to compare.\r\n\r\n\r\n\r\n\r\n", "> @patrickvonplaten I would measure impact of running cnn summarization (which uses seq_len 1024) to fully acknowledge the trade off you are making:\r\n> \r\n> ```shell\r\n> cd examples/seq2seq\r\n> python run_eval.py facebook/bart-large-cnn cnn_dm/test.source cnn_gens.txt \\\r\n> \t--reference_path cnn_dm/test.target \\\r\n> \t--score_path cnn_rouge.json --task summarization \\\r\n> \t--n_obs 500 --fp16\r\n> ```\r\n> \r\n> This should take ~5 mins per branch.\r\n> \r\n> Otherwise, LGTM! Thanks for cleaning up after me :)\r\n\r\n\r\nApplied as much perf improvement as possible -> Time from master to this PR for the above command is reduced from ~2min10s to ~2min05s (ran three times) = 2.5% speed-up. Removed as many `contiguous()` operations as possible", "Training gives good/equal results to master. However, I see a 5% slow-down in training. `generation()` is as fast or faster than master, but training yields a slow-down of around 5% => so still investigating. Could be the masks", "Reran the fine-tuning script a couple of times on a gcp instance so that no other tasks can interfere and float masks are actually faster than boolean masks and and give more or less same results than previous bart model on master. Here the results: \r\n\r\n\r\nRefactor (no boolean masks)\r\n\r\n![bart_refactor_branch](https://user-images.githubusercontent.com/23423619/101674871-548d9f80-3a59-11eb-9f6c-0cd33668be22.png)\r\n\r\nmaster\r\n\r\n![bart_master](https://user-images.githubusercontent.com/23423619/101674915-62dbbb80-3a59-11eb-835f-5b84c2e8150d.png)\r\n", "good to merge for me" ]
1,606
1,636
1,607
MEMBER
null
# What does this PR do? This PR refactors the Bart model. The goal is to fix a couple of bugs related to Bart, make Bart more consistent with other models in the library and make Bart the "default" Seq2Seq template model for other models. The PR may be a bit difficult to review, so the following sections lists the main changes and the reasons why they are taken. ## In-detail explanation of main changes 1. Fix a bug related to `past_key_values`, `use_cache` and `decoder_input_ids`. Previously it was assumed that if `use_cache=True`, then `decoder_input_ids` have to be of length 1. This is not always the case! E.g. If the first decoder_input_ids prompt is longer than 1 and `use_cache=True` this would have led to errors previously - see #7814, #6353. This is fixed now so that any length of `past_key_values` can be combined with any length of `decoder_input_ids`, just as it can be done for GPT2, T5, CTRL, ... In order to make the pt_tf_equivalence tests pass, some hotfixes are applied for TFBart. TFBart will be refactored in a later PR. A test `create_and_check_decoder_model_past_large_inputs` is added to ensure that this functionality works. 2. Allow to use `BartEncoder` and `BartDecoder` separately from the `BartModel`. Because Bart is the default seq2seq model it's a great opportunity to combine just the `BartDecoder` with other "encoder-only" models. E.g. if someone wants to run experiments on long-range summarization `Longformer-Bart` could be an interesting combination (@ibeltagy). This PR lays the groundwork to easily combine these models by making `BartEncoder` and `BartDecoder` fully functional models on their own. One should probably also add a `BartForCausalLM` class analogs to https://github.com/huggingface/transformers/blob/df311a5ccf50be3031474e289b43b1be43111144/src/transformers/models/prophetnet/modeling_prophetnet.py#L1882 (could be a good first issue). further improves how to handle an issue like #5282 3. Simplify query, key, value projections in attention layer. A rather difficult if-else cascade with a complex follow-up function to concat past_key_values is simplified to a single if-elif-elif-else clause. IMO, the code in `BartAttention.forward()` is much clearer now. 4. Change the cache from dict to tuple and make it stateless. The general design in the library is to have a stateless design for the cache. Bart previously used a dict -> this PR changes the cache to a tuple for consistency. It should also be a bit more efficient, more consistent and easier to use with torchscript and onnx. 5. Bart did a lot of dimensions transposing from time -> batch and batch -> time. This is not at all necessary IMO. We can just have the batch dimension in the first spot the whole time just like the other models do too. Therefore, I deleted a bunch of `transpose(0, 1)` operations. 6. Add inputs_embeds. Just like other models Bart can make use of inputs_embeds. 7. Rename all classes from `...Model` to `Bart...Model`. Public class names that needed to be renamed were depreciated for backwards compatibility. This is better for look-up and consistency with other models. 8. Simpler handling of attention_masks. Previously Bart moved many different masks with many different names through-out the model. This PR aligns the functionality with other models, but creating the full attention mask for each model in the beginning of `BartEncoder` and `BartDecoder` instead of doing it in the attention function. This simplifies the code and is more consistent with other models. 9. 
Re-structure order in `modeling_bart.py`. Usually, modeling files have helper functions in the beginning, followed by submodules, followed by docstring, followed by the pre-trained models. This PR re-orders `modeling_bart.py` accordingly. 10. Replace functionality to make lm head embeddings on-the-fly by the usual `_init_weights` tying mechanism that we have in PyTorch. This is a) much more consistent with other models and b) cleaner because we don't have to instantiate a new class each time `get_output_embeddings()` is called. Solves #5282. 11. (subjectively) better naming. Replace x -> hidden_states, etc... ## Breaking changes - There are no breaking changes to the "public" API IMO (except if it corrects a bug). `BartModel`, `BartForConditionalGeneration` and all other `BartPretrainedModel`s have exactly the same as before except for the following case which was a bug: Previously, the following code: ```python from transformers import BartForConditionalGeneration, BartTokenizer model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn") tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") input_ids = tokenizer("the encoded sequence in all its beauty", return_tensors="pt").input_ids decoder_input_ids = tokenizer("the decoder sequence", return_tensors="pt").input_ids print(model(input_ids, decoder_input_ids=decoder_input_ids).logits.shape) ``` would have printed out only a single output because `use_cache` is enabled which was wrong because no causal-mask was used. This PR corrects the behavior so that the output seq length matches the decoder_input_ids seq lengths. - BartEncoder and BartDecoder now have a rather different API. This is OK for me since it was not possible to import the models directly and there were only model components. - sub modules of Bart are named differently, *e.g.* LayerNorm is now called BartLayerNorm. Since these modules are also not public I don't think we have to depreciate the names. - The API of `BartModel`, ... is extended by `inputs_embeds` and `decoder_inputs_embeds`. ## Review: Because Bart is the most important Seq2Seq model in the library (5 other models classes depend on it), I would be very happy for a couple of thorough reviews. Also all kinds of comments, improvements, discussions, questions are welcome! I ran all slow tests and tried to be careful with the changes. In case @sshleifer is interested I'd also be more than happy about some feedback from you ;-) ## TODO-List - [x] Keep dims consistent within the model -> no switching around between time x batch_size and batch_size x time. We can just stick to batch_size x time throughout the whole forward pass just like other models do too. - [x] Add same `lm_head` logic, other models have as well. Bart should make use of the `tie_weight_embeddings` function instead of doing weird `"on-the-fly"` output embeddings, #5282 - [x] Clean the Attention layer: Replace dict cache by past_key_values tuple (consistency with other models and stateless which is better IMO). Break up complicated if-else cascade and remove unnecessary parameters. - [x] Make Encoder/Decoder stand-alone models to be used on their own: #7127, this way pretrained weights can be used in the Encoder-Decoder framework as well. 
If I remember correctly @ibeltagy was interested in this as well - [x] Correct error with past_key_values/decoder_input_ids/use_cache: #7814, #6353 - [x] Make Bart torchscriptable: #6348 - [x] Add inputs_embeds to Bart - [x] (very subjectively) better naming - [x] Check that all slow tests are passing - ran the following slow tests:
```
[ # assumes USE_CUDA is exported, rather than passed
RUN_SLOW=1 pytest tests/test_modeling_pegasus.py
RUN_SLOW=1 pytest tests/test_modeling_bart.py
RUN_SLOW=1 pytest tests/test_modeling_marian.py
RUN_SLOW=1 pytest tests/test_modeling_mbart.py
RUN_SLOW=1 pytest tests/test_modeling_fsmt.py
RUN_SLOW=1 pytest tests/test_modeling_blenderbot.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_conversational.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_text2text_generation.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_summarization.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_translation.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_dialog.py
]
```
=> MBartEnroIntegrationTest.test_enro_generate_batch fails on this PR, but also on master with the same message, so that's OK for me! - [x] Update docstrings and final design change check - [x] Refactor Bart tests - [x] Check no speed regression - [x] Check no training performance regression (Is there a good fine-tuning script I can run for this, @patil-suraj, @sshleifer?)
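To illustrate point 1 above (any `past_key_values` length combined with any `decoder_input_ids` length), here is a minimal sketch of the intended usage after the refactor; it is not taken from the PR itself and the prompt strings are made up:
```python
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")

input_ids = tokenizer("the encoded sequence", return_tensors="pt").input_ids
# A decoder prompt longer than one token - previously this broke with use_cache=True.
decoder_input_ids = tokenizer("a multi-token decoder prompt", return_tensors="pt").input_ids

# First pass caches the decoder self-attention and cross-attention key/value states.
outputs = model(input_ids, decoder_input_ids=decoder_input_ids, use_cache=True)

# Later passes only need to feed the newly generated token(s) together with the cache.
next_token = outputs.logits[:, -1:].argmax(-1)
outputs = model(
    input_ids,
    decoder_input_ids=next_token,
    past_key_values=outputs.past_key_values,
    use_cache=True,
)
```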
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8900/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8900", "html_url": "https://github.com/huggingface/transformers/pull/8900", "diff_url": "https://github.com/huggingface/transformers/pull/8900.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8900.patch", "merged_at": 1607543725000 }
https://api.github.com/repos/huggingface/transformers/issues/8899
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8899/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8899/comments
https://api.github.com/repos/huggingface/transformers/issues/8899/events
https://github.com/huggingface/transformers/issues/8899
755,235,575
MDU6SXNzdWU3NTUyMzU1NzU=
8,899
Wrong Length of Dataset in examples/seq2seq/finetune_trainer.py
{ "login": "iseesaw", "id": 31267864, "node_id": "MDQ6VXNlcjMxMjY3ODY0", "avatar_url": "https://avatars.githubusercontent.com/u/31267864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iseesaw", "html_url": "https://github.com/iseesaw", "followers_url": "https://api.github.com/users/iseesaw/followers", "following_url": "https://api.github.com/users/iseesaw/following{/other_user}", "gists_url": "https://api.github.com/users/iseesaw/gists{/gist_id}", "starred_url": "https://api.github.com/users/iseesaw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iseesaw/subscriptions", "organizations_url": "https://api.github.com/users/iseesaw/orgs", "repos_url": "https://api.github.com/users/iseesaw/repos", "events_url": "https://api.github.com/users/iseesaw/events{/privacy}", "received_events_url": "https://api.github.com/users/iseesaw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
NONE
null
The help text `train/validation/test examples. -1 means use all` may not be correct: https://github.com/huggingface/transformers/blob/693ac3594b96e86dd282fdf8e413f3a48b176892/examples/seq2seq/finetune_trainer.py#L97-L99 `n_train/val/test` is used to compute the length of the dataset, so one line is dropped if it is set to -1. It should be None to use all examples. https://github.com/huggingface/transformers/blob/693ac3594b96e86dd282fdf8e413f3a48b176892/examples/seq2seq/utils.py#L136-L137
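A tiny, self-contained illustration of the off-by-one being reported (the example list is made up):
```python
# With n_obs = -1, Python slicing drops the last example instead of keeping them all;
# slicing with None keeps every example, which is what "-1 means use all" intends.
examples = ["example 1", "example 2", "example 3"]

n_obs = -1
print(examples[:n_obs])   # ['example 1', 'example 2'] -> one example is silently lost

n_obs = None
print(examples[:n_obs])   # ['example 1', 'example 2', 'example 3']
```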
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8899/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8898
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8898/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8898/comments
https://api.github.com/repos/huggingface/transformers/issues/8898/events
https://github.com/huggingface/transformers/issues/8898
755,227,251
MDU6SXNzdWU3NTUyMjcyNTE=
8,898
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-Q_fyRn/sacrebleu/
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Installing transformers is also broken\r\n\r\n(test) rabeeh@gpu4:~/transformers/examples/seq2seq$ pip install git+https://github.com/huggingface/transformers.git\r\nCollecting git+https://github.com/huggingface/transformers.git\r\n Cloning https://github.com/huggingface/transformers.git to /tmp/pip-req-build-V7nNeF\r\n Installing build dependencies ... done\r\n Complete output from command python setup.py egg_info:\r\n Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/tmp/pip-req-build-V7nNeF/setup.py\", line 156\r\n entries = \"\\n\".join([f' \"{k}\": \"{v}\",' for k, v in deps.items()])\r\n ^\r\n SyntaxError: invalid syntax\r\n \r\n ----------------------------------------\r\nCommand \"python setup.py egg_info\" failed with error code 1 in /tmp/pip-req-build-V7nNeF/\r\n", "issue solved with python = 3.7" ]
1,606
1,606
1,606
NONE
null
Hi, I am trying with the master branch and getting this error when installing the requirements inside examples. Thanks. Collecting seqeval (from -r ../requirements.txt (line 3)) Downloading https://files.pythonhosted.org/packages/9d/2d/233c79d5b4e5ab1dbf111242299153f3caddddbb691219f363ad55ce783d/seqeval-1.2.2.tar.gz (43kB) 100% |████████████████████████████████| 51kB 13.4MB/s Collecting psutil (from -r ../requirements.txt (line 4)) Downloading https://files.pythonhosted.org/packages/33/e0/82d459af36bda999f82c7ea86c67610591cf5556168f48fd6509e5fa154d/psutil-5.7.3.tar.gz (465kB) 100% |████████████████████████████████| 471kB 2.7MB/s Collecting sacrebleu (from -r ../requirements.txt (line 5)) Downloading https://files.pythonhosted.org/packages/b9/d6/258a1e63463b4731a387f0872dca759c330bf4845cc0464f2c65028674b6/sacrebleu-1.3.7.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-Q_fyRn/sacrebleu/setup.py", line 65, in <module> version = get_version(), File "/tmp/pip-install-Q_fyRn/sacrebleu/setup.py", line 56, in get_version with open(os.path.join(os.path.dirname(__file__), 'sacrebleu.py'), encoding='utf-8') as fin: TypeError: 'encoding' is an invalid keyword argument for this function ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-Q_fyRn/sacrebleu/
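For context, the `TypeError` in the traceback is what Python 2's builtin `open()` raises when given an `encoding` argument, which matches the later comment that the issue disappeared with Python 3.7. A small sketch of the difference (the file name is made up):
```python
import io

# Python 2: the builtin open() has no `encoding` keyword -> TypeError, as in the traceback.
# Python 3: the builtin open() accepts `encoding`, so sacrebleu's setup.py works as written.
with io.open("some_file.py", encoding="utf-8") as fin:  # io.open accepts `encoding` on both versions
    text = fin.read()
```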
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8898/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8897
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8897/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8897/comments
https://api.github.com/repos/huggingface/transformers/issues/8897/events
https://github.com/huggingface/transformers/issues/8897
755,208,628
MDU6SXNzdWU3NTUyMDg2Mjg=
8,897
finetune_trainer with python -m torch.distributed.launch
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Also, could you add the command to run distributed training with GPUs with finetune_trainer in README? thanks ", "I tried with latest version of transformers on 4 gpu with distributed training \r\n\r\nrabeeh@gpu4:~/transformers/examples/seq2seq$ python -m torch.distributed.launch finetune.py --learning_rate=3e-5 --fp16 --gpus 4 --do_train --do_predict --n_val 1000 --val_check_interval 0.1 --data_dir wmt_en_ro --train_batch_size=1 --eval_batch_size=1 --output_dir=xsum_results --num_train_epochs 1 --model_name_or_path t5-smal\r\n\r\ngetting the following error, thanks \r\n\r\nfinetune.py: error: unrecognized arguments: --local_rank=0\r\nTraceback (most recent call last):\r\n File \"/opt/conda/envs/transformers/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/opt/conda/envs/transformers/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/opt/conda/envs/transformers/lib/python3.7/site-packages/torch/distributed/launch.py\", line 260, in <module>\r\n main()\r\n File \"/opt/conda/envs/transformers/lib/python3.7/site-packages/torch/distributed/launch.py\", line 256, in main\r\n cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/opt/conda/envs/transformers/bin/python', '-u', 'finetune.py', '--local_rank=0', '--learning_rate=3e-5', '--fp16', '--gpus', '4', '--do_train', '--do_predict', '--n_val', '1000', '--val_check_interval', '0.1', '--data_dir', 'wmt_en_ro', '--train_batch_size=1', '--eval_batch_size=1', '--output_dir=xsum_results', '--num_train_epochs', '1', '--model_name_or_path', 't(trans(tr(transformers) \r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@rabeehk I also encountered the same problem as you. Have you solved it?" ]
1,606
1,621
1,619
NONE
null
Hi, I need to run finetune_trainer with multiple GPUs, and I am getting the error "Default process group is not initialized": AssertionError: Default process group is not initialized I am using a custom dataloader; it might be hard to share all parts of the code, but I defined the sampler as DistributedSampler. This is transformers 3.5.1, Python 3.7, on GPU. Thanks, Best, Rabeeh
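A minimal sketch of what typically resolves this assertion: the default process group has to be initialized before a `DistributedSampler` is created (the dataset below is a stand-in, not the reporter's actual data):
```python
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# DistributedSampler asks the default process group for the rank and world size,
# so init_process_group must run before the sampler is built
# (torch.distributed.launch sets the environment variables it reads).
dist.init_process_group(backend="nccl")

train_dataset = TensorDataset(torch.arange(100))  # stand-in for the real dataset
sampler = DistributedSampler(train_dataset)
loader = DataLoader(train_dataset, batch_size=8, sampler=sampler)
```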
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8897/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8896
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8896/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8896/comments
https://api.github.com/repos/huggingface/transformers/issues/8896/events
https://github.com/huggingface/transformers/pull/8896
755,203,918
MDExOlB1bGxSZXF1ZXN0NTMwOTYwODY3
8,896
Create README.md
{ "login": "snunlp", "id": 58285171, "node_id": "MDQ6VXNlcjU4Mjg1MTcx", "avatar_url": "https://avatars.githubusercontent.com/u/58285171?v=4", "gravatar_id": "", "url": "https://api.github.com/users/snunlp", "html_url": "https://github.com/snunlp", "followers_url": "https://api.github.com/users/snunlp/followers", "following_url": "https://api.github.com/users/snunlp/following{/other_user}", "gists_url": "https://api.github.com/users/snunlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/snunlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/snunlp/subscriptions", "organizations_url": "https://api.github.com/users/snunlp/orgs", "repos_url": "https://api.github.com/users/snunlp/repos", "events_url": "https://api.github.com/users/snunlp/events{/privacy}", "received_events_url": "https://api.github.com/users/snunlp/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Looks like this PR was unfortunately broken, so I'm going to close it. Also noting that the way to update a model card now is to update it directly in your model repo! see https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755" ]
1,606
1,607
1,607
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8896/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8896", "html_url": "https://github.com/huggingface/transformers/pull/8896", "diff_url": "https://github.com/huggingface/transformers/pull/8896.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8896.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8895
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8895/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8895/comments
https://api.github.com/repos/huggingface/transformers/issues/8895/events
https://github.com/huggingface/transformers/pull/8895
755,202,087
MDExOlB1bGxSZXF1ZXN0NTMwOTU5NDMy
8,895
Create README.md
{ "login": "snunlp", "id": 58285171, "node_id": "MDQ6VXNlcjU4Mjg1MTcx", "avatar_url": "https://avatars.githubusercontent.com/u/58285171?v=4", "gravatar_id": "", "url": "https://api.github.com/users/snunlp", "html_url": "https://github.com/snunlp", "followers_url": "https://api.github.com/users/snunlp/followers", "following_url": "https://api.github.com/users/snunlp/following{/other_user}", "gists_url": "https://api.github.com/users/snunlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/snunlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/snunlp/subscriptions", "organizations_url": "https://api.github.com/users/snunlp/orgs", "repos_url": "https://api.github.com/users/snunlp/repos", "events_url": "https://api.github.com/users/snunlp/events{/privacy}", "received_events_url": "https://api.github.com/users/snunlp/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Looks like this PR was unfortunately broken, so I'm going to close it. Also noting that the way to update a model card now is to update it directly in your model repo! see https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755" ]
1,606
1,607
1,607
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8895/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8895", "html_url": "https://github.com/huggingface/transformers/pull/8895", "diff_url": "https://github.com/huggingface/transformers/pull/8895.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8895.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8894
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8894/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8894/comments
https://api.github.com/repos/huggingface/transformers/issues/8894/events
https://github.com/huggingface/transformers/issues/8894
755,118,625
MDU6SXNzdWU3NTUxMTg2MjU=
8,894
custom prepare_inputs_for_generation for generation
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "solved with implementing it inside the model of T5ForConditionalGeneration thanks" ]
1,606
1,606
1,606
NONE
null
Hi, I need to change the model_inputs used for generation. I am using T5ForConditionalGeneration, which has an extra input parameter that needs to be passed in each time I call model.generate(). I cannot see how to rewrite the generate function to also pass this argument; could you provide me with some explanation: https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/generation_utils.py#L676 Thanks
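One way this can be done (and roughly what the follow-up comment describes) is to subclass the model and override `prepare_inputs_for_generation`; the `task_ids` argument below is purely hypothetical and only works if the model's `forward` also accepts it:
```python
from transformers import T5ForConditionalGeneration

class T5WithExtraInput(T5ForConditionalGeneration):
    def prepare_inputs_for_generation(self, input_ids, **kwargs):
        # Build the usual inputs, then attach the hypothetical extra argument so that
        # generate() passes it to forward() at every decoding step.
        model_inputs = super().prepare_inputs_for_generation(input_ids, **kwargs)
        model_inputs["task_ids"] = self.task_ids  # assumed to be set on the instance beforehand
        return model_inputs
```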
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8894/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8893
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8893/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8893/comments
https://api.github.com/repos/huggingface/transformers/issues/8893/events
https://github.com/huggingface/transformers/issues/8893
755,049,449
MDU6SXNzdWU3NTUwNDk0NDk=
8,893
[🚀 Feature request] Performer support, tensorflow code, not jax.
{ "login": "guotong1988", "id": 4702353, "node_id": "MDQ6VXNlcjQ3MDIzNTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guotong1988", "html_url": "https://github.com/guotong1988", "followers_url": "https://api.github.com/users/guotong1988/followers", "following_url": "https://api.github.com/users/guotong1988/following{/other_user}", "gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}", "starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions", "organizations_url": "https://api.github.com/users/guotong1988/orgs", "repos_url": "https://api.github.com/users/guotong1988/repos", "events_url": "https://api.github.com/users/guotong1988/events{/privacy}", "received_events_url": "https://api.github.com/users/guotong1988/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "https://github.com/huggingface/transformers/issues/7675" ]
1,606
1,606
1,606
CONTRIBUTOR
null
https://arxiv.org/abs/2009.14794 Thank you thank you very much.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8893/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8892
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8892/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8892/comments
https://api.github.com/repos/huggingface/transformers/issues/8892/events
https://github.com/huggingface/transformers/pull/8892
755,037,326
MDExOlB1bGxSZXF1ZXN0NTMwODI2MDgw
8,892
TFRag draft #1 (page BROKEN) - Should close and use #9002 instead
{ "login": "ratthachat", "id": 56621342, "node_id": "MDQ6VXNlcjU2NjIxMzQy", "avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ratthachat", "html_url": "https://github.com/ratthachat", "followers_url": "https://api.github.com/users/ratthachat/followers", "following_url": "https://api.github.com/users/ratthachat/following{/other_user}", "gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}", "starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions", "organizations_url": "https://api.github.com/users/ratthachat/orgs", "repos_url": "https://api.github.com/users/ratthachat/repos", "events_url": "https://api.github.com/users/ratthachat/events{/privacy}", "received_events_url": "https://api.github.com/users/ratthachat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi guys, most commits were from the previous PR (TFDPR). I do not know how to remove them, sorry!\r\nOnly `modeling_tf_rag.py` is new here.", "Awesome work @ratthachat!!!\r\n\r\nFor the input, I think the best way is to check how it is done in TF BERT for example and if you have difficulties to understand you can ask your questions here :)\r\n\r\nAbout the `.numpy()` as suggested you can use the `tf.make_ndarra()` like this `tf.make_ndarray(tf.make_tensor_proto(my_tensor))`.\r\n\r\nAbout the weights, I suggest you to check how it is done in TF BART there are similar weights naming.", "> Awesome work @ratthachat!!!\r\n> \r\n> For the input, I think the best way is to check how it is done in TF BERT for example and if you have difficulties to understand you can ask your questions here :)\r\n> \r\n> About the `.numpy()` as suggested you can use the `tf.make_ndarra()` like this `tf.make_ndarray(tf.make_tensor_proto(my_tensor))`.\r\n> \r\n> About the weights, I suggest you to check how it is done in TF BART there are similar weights naming.\r\n\r\nHi Julien @jplu , thanks for the reply!\r\nOn these points, I think I may miss something simple, but I could not solve the puzzle by myself at this moment.\r\n1) on `tf.make_ndarray(tf.make_tensor_proto(my_tensor))` , I could make it work **only** on eager mode too, so still have the problem in graph mode (I could not make it work inside @tf.function)\r\n\r\n2) about the input, yes I tried to replicate TF Bert & TF DPR (which is previously my implement) e.g. replace `inputs` with `input_ids` with/without default value (`None`) and use `input_processing` , but no matter what I tried , the simple call got error\r\n\r\n```\r\noutputs = model(inputs)\r\nValueError: The first argument to `Layer.call` must always be passed.\r\n```\r\n\r\nI really missed something simple here, I will try to play around again.\r\n\r\n3) about the weights, yes, I tried to replicate other TF models, so all weights loading works in `.from_pretrained_question_encoder_generator` and mostly properly loaded in `.from_pretrained`. \r\nOnly two aforementioned weights could not have the correct name . I tried various fixing to these minor cases but could not, except my very ugly manual load.", "1. Arf, I thought that this line would automatically deactivate the graph execution, but it is not the case. So to make it short, convert a graph tensor to numpy array is not possible because the graph does not execute in Python - so there is no numpy at graph execution. The only one work around would be to play with [tf.py_function](https://www.tensorflow.org/api_docs/python/tf/py_function) but you will literally kill the perf with this, even though it is your only way to go to.\r\n\r\n2. This error means that you are not passing the first argument to a call method (basically don't pass any `input_ids`), you always have to pass it. Where in the code the error is raised?\r\n\r\n3. What did you try more precisely?", "Hi again Julien,\r\n\r\n1. I see! Let us work only on eager mode for now and come back later.\r\n\r\n2. Here's 4.1.0 colab where I try to modify 4.1.0 input on `TFRagModel` (Cell 6) \r\nhttps://colab.research.google.com/drive/1RvtOxUIravWEkwMnj48pedv2mFnlWYkY?usp=sharing\r\nPlease see Cell 9, to see various ways I try to pass the first argument. Really sorry I think I overlooked some simple things here.\r\n\r\n3. 
I tried to adjust `base_prefix_name`, and also setting module `name` as discussed with Sam [here](https://discuss.huggingface.co/t/solved-issue-on-translating-dpr-to-tfdpr-on-loading-pytorch-weights-to-tf-model/1764/2): \r\n", "1. The problem is that you cannot do anything with the model if it cannot be run in graph mode (no serving, no training, no optimization, very slow) 😢 \r\n2. Please, see how it is done in all the other TF implementations, the way you handle the inputs in `modeling_tf_rag.py` is wrong.\r\n3. I don't really have time this week to go deeper in this, but I will take some time on Monday to do it!\r\n\r\nA test file is missing as well, having it might help you to detect what has to be updated :)", "Hi Julien!\r\n\r\n> 1. The problem is that you cannot do anything with the model if it cannot be run in graph mode (no serving, no training, no optimization, very slow) 😢\r\n\r\nI got it. I think we can do some work around on TFRag model training in graph mode. Instead of fitting the model with `input_ids` which will need `.numpy` and `retriever` , we can do all retriever stuffs offline (or with tf.Dataset) first to get the `context_input_ids` and then feed them directly to the trainnig loop. I will test this idea.\r\n\r\n> 2. Please, see how it is done in all the other TF implementations, the way you handle the inputs in `modeling_tf_rag.py` is wrong.\r\n\r\nFinally, I was able to find a single line that's wrong :) I will update the file in my local.\r\n\r\n> 3. I don't really have time this week to go deeper in this, but I will take some time on Monday to do it!\r\n> A test file is missing as well, having it might help you to detect what has to be updated :)\r\n\r\nThanks Julien. I will write the full test file soon. My reason is that I need some suggestions (as commented in the modeling file) for my current draft at this stage , so I have not written the full-fledge test file yet. The colab I posted did have some 10+ basic tests, which I still think I need to (cleanly) pass all these tests first. \r\n", "> I got it. I think we can do some work around on TFRag model training in graph mode. Instead of fitting the model with input_ids which will need .numpy and retriever , we can do all retriever stuffs offline (or with tf.Dataset) first to get the context_input_ids and then feed them directly to the trainnig loop. I will test this idea.\r\n\r\nI was thinking the same :) to do the search offline, might be a solution. I think the very long term solution would be to use something more adapted to TensorFlow than FAISS such as [SCANN](https://github.com/google-research/google-research/tree/master/scann). (Pinging @lhoestq to know what he is thinking about this :) )\r\n\r\n> Finally, I was able to find a single line that's wrong :) I will update the file in my local.\r\n\r\nNice!! Just to let you know that Friday we have also updated the way the handle the booleans, so be careful to integrate this as well.\r\n\r\n> Thanks Julien. I will write the full test file soon. My reason is that I need some suggestions (as commented in the modeling file) for my current draft at this stage , so I have not written the full-fledge test file yet. 
The colab I posted did have some 10+ basic tests, which I still think I need to (cleanly) pass all these tests first.\r\n\r\nIt is ok, we are not in a hurry, take your time ^^", "@ratthachat - I think you're on the correct track here :-)\r\n\r\nTFRag will actually be the first TF composite model, so I'm quite certain you'll run into problems here that we haven't seen before. \r\nIn general I think:\r\n\r\n1) We should try to make the `from_pretrained` method work in a nice way (this will actually also show us how `TFEncoderDecoder` could be implemented). I'll take a look at this :-) \r\n\r\n2) Make `TFRagTokenForGeneration` work with integration tests that the model behaves the same way as PT using the faiss index. It's fine for me if it works only in eager mode for now. Maybe we can think about a different solution at a later stage if it's impossible to have RAG + Faiss in graph mode. I think it's a bit out-of-the-scope to integrate SCANN here with RAG.\r\n\r\n3) Add the other functionalities. \r\n\r\nI'll try to help you with 1) here - will add some commits to your PR.", "> We should try to make the from_pretrained method work in a nice way (this will actually also show us how TFEncoderDecoder could be implemented). I'll take a look at this :-)\r\n\r\nI would like to remove all the `from_pretrained` calls from the model implementation, it will raises issues for some usage, such as training.\r\n\r\n> Make TFRagTokenForGeneration work with integration tests that the model behaves the same way as PT using the faiss index. It's fine for me if it works only in eager mode for now. Maybe we can think about a different solution at a later stage if it's impossible to have RAG + Faiss in graph mode. I think it's a bit out-of-the-scope to integrate SCANN here with RAG.\r\n\r\nThe problem here is that if the model runs only in eager mode, the model won't be able to be served properly and then becomes useless. I don't see the point to have a model that runs only in your console locally :( The best solution IMHO would be to run the FAISS search offline.", "Hey @ratthachat, \r\n\r\nI think the `from_pretrained()` functionality now works as expected. I removed some hacks and we shouldn't have to define any `from_pretrained()` method actually.\r\n\r\nI've added two tests. 1 already passes (great job! - I didn't really change anything here...), the other one (a very difficult one based on `generate()` does not pass yet). It'll be quite difficult to make the other one pass, but if you manage you'll certainly have an in-depth knowledge of how `generate()` works.\r\n\r\nI would recommend the following next steps for the PR:\r\n\r\n1) Implement @jplu's simplified handling of the inputs as proposed in this PR: https://github.com/huggingface/transformers/pull/8602 . This should remove a lot of boiler plate code. If some params are not supported yet, I'm sure @jplu can help. \r\n\r\n2) Make the generate test pass\r\n\r\nAfter this I'm happy to take another look :-) \r\n\r\nLemme know if you have any problems with the weight loading. It all worked nicely for me", "> Implement @jplu's simplified handling of the inputs as proposed in this PR: #8602 . This should remove a lot of boiler plate code. I some params are not supported yet, I'm sure @jplu can help.\r\n\r\nI will be happy to help. Which ones are the \"not supported yet\"?", "> > Implement @jplu's simplified handling of the inputs as proposed in this PR: #8602 . This should remove a lot of boiler plate code. 
If some params are not supported yet, I'm sure @jplu can help.\r\n> \r\n> I will be happy to help. Which ones are the \"not supported yet\"?\r\n\r\n\r\n\r\n> > Implement @jplu's simplified handling of the inputs as proposed in this PR: #8602 . This should remove a lot of boiler plate code. If some params are not supported yet, I'm sure @jplu can help.\r\n> \r\n> I will be happy to help. Which ones are the \"not supported yet\"?\r\n\r\nI think it should all work perfectly fine! Sorry, this came across a bit bad - meant to say \"just in case\" something doesn't work don't hesitate to ping you ;-) Wasn't sure if int inputs like `n_docs` are supported, but I think this was added as well - so it should all work fine :-) ", "Thanks so much for your great help, Patrick! I will carefully look in each point you made. Full addressing will take a while, but I will be back. For now I have some initial responses: \r\n\r\n- (Need help the most) Unfortunately ,there's still a bug in weight loading if removing the hack (please see details in below thread)\r\n\r\n- About graph mode & training, I think we can consistently combine we three's thoughts here by (a) Finish the code in eager mode first -- (b) Make minimal changes to support offline-mode for retriever (or maybe not change anything at all) -- (c) Make a community notebook to guide this offline-retrieved training in graph mode. -- (d) SCANN will be an interesting long-term solution we can discuss after all these stuffs.\r\n\r\n- I think there is similar graph-retrieval problem also in TFDPR training (which we haven't tested) , so I will also try make some example notebook to train TFDPR in graph mode using this offline principle.\r\n\r\n- May I ask what is the meaning of these original Pytorch's 3 lines? (in `def generate()` )\r\n```\r\n # retrieved_doc_embeds = retrieved_doc_embeds.to(question_hidden_states)\r\n # context_input_ids = context_input_ids.to(input_ids)\r\n # context_attention_mask = context_attention_mask.to(input_ids)\r\n```\r\n\r\n- About test on `generate`, I will try give it a shot. BTW, I previously test `Bart` vs. `TFBart` and found out that they produce **\"different\"** `generate` results as well. \r\nDo you have the same experience, and will this affect RAG `generate` test ??\r\n", "> I think the `from_pretrained()` functionality now works as expected. I removed some hacks and we shouldn't have to define any `from_pretrained()` method actually.\r\n> \r\n> Lemme know if you have any problems with the weight loading. It all worked nicely for me\r\n\r\nHi Patrick, @patrickvonplaten \r\nUnfortunately, I found the same bug prior to my hack. \r\n\r\n- `from_pretrained_question_encoder_generator` <-- Work great\r\n\r\n- `from_pretrained` <-- **BUG** (only on **_2 weights_**) : `['model.shared.weight', 'final_logits_bias']`\r\n\r\n```\r\nSome weights or buffers of the TF 2.0 model TFRagTokenForGeneration were not initialized from the PyTorch model and are newly initialized: ['model.shared.weight', 'final_logits_bias']\r\n```\r\n\r\n- (New found) local loading `from_pretrained(\"./rag\")` <-- **BUG** on **all weights**\r\n\r\nPlease (please :) take a look at this new colab which provides \"minimal\" code to show the bugs (just 8 cells).\r\nhttps://colab.research.google.com/drive/1s-j9PB9yzrFsL6q5rZUQyf8_Lt6jDAkL?usp=sharing\r\nBugs only in the last 2 cells.", "> > I think the `from_pretrained()` functionality now works as expected. 
I removed some hacks and we shouldn't have to define any `from_pretrained()` method actually.\r\n> > Lemme know if you have any problems with the weight loading. It all worked nicely for me\r\n> \r\n> Hi Patrick, @patrickvonplaten\r\n> Unfortunately, I found the same bug prior to my hack.\r\n> \r\n> * `from_pretrained_question_encoder_generator` <-- Work great\r\n> * `from_pretrained` <-- **BUG** (only on **_2 weights_**) : `['model.shared.weight', 'final_logits_bias']`\r\n\r\nThis is not a bug. It's fine actually. Those weights are handled differently in TF and PT, so this message is expected.\r\n\r\n> \r\n> ```\r\n> Some weights or buffers of the TF 2.0 model TFRagTokenForGeneration were not initialized from the PyTorch model and are newly initialized: ['model.shared.weight', 'final_logits_bias']\r\n> ```\r\n> \r\n> * (New found) local loading `from_pretrained(\"./rag\")` <-- **BUG** on **all weights**\r\n\r\nLet me look into this!\r\n\r\n> \r\n> Please (please :) take a look at this new colab which provides \"minimal\" code to show the bugs (just 8 cells).\r\n> https://colab.research.google.com/drive/1s-j9PB9yzrFsL6q5rZUQyf8_Lt6jDAkL?usp=sharing\r\n> Bugs only in the last 2 cells.", "@ratthachat,\r\n\r\nI think you can ignore those warnings for now. A good next step to make sure that the `from_pretrained()` methods work correctly is to add tests that verify that after saving/loading the model yields the same output as before: \r\n- https://github.com/huggingface/transformers/blob/9d7d0005b046a95d9d59354714bb6c3547a612fe/tests/test_modeling_rag.py#L900\r\n\r\nI checked and the following code works fully as expected:\r\n\r\n```\r\nfrom transformers import RagRetriever, TFRagTokenForGeneration\r\n\r\nretriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\nmodel = TFRagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\", from_pt=True, retriever=retriever)\r\n\r\nmodel.save_pretrained(\"./rag\")\r\nmodel = TFRagTokenForGeneration.from_pretrained(\"./rag\", retriever=retriever)\r\n```\r\n\r\nAll those commands work as they should, so I think we're good for now with the `from_pretrained()`. I think the next step should be to concentrate on removing the TF input boilerplate code and then making the generation work.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? Hi guys, this is the draft WIP of TFRag. It is runnable in eager mode with mostly proper outputs. I really need help/consultation at this stage, especially from @Patrick and @jplu. In this draft, only `modeling_tf_rag.py` is new. ## What is done and tested (working on HF 4.0.0, but not on master due to changes in TF input) - TFRagModel - TFRagTokenForGeneration - generate function with no_beam_search - works in eager mode - Colab notebook to test/play around with whether the code works properly: https://colab.research.google.com/drive/1CfCulkKGrneiQ0gV0Bgdo71gZ_kgMRIB?usp=sharing ## Main things not done yet - TFRagSequenceForGeneration - beam_search generation (may wait for the TF generation refactor?) - Working in graph mode (due to the need for `.numpy()` when calling the retriever, which doesn't work in graph mode) - Change input format for HF 4.1.0 (need help from @jplu) ## Need your suggestions on NEED_ADVICE, NEED_HELP As stated, the code is mostly OK except at the points I marked TOFIX, which will be cleaned up later while finishing the draft. However, there are 2 categories where I really need help, especially from @Patrick: 1) There is some code that works, but I am not sure if it meets the Huggingface coding standard (marked by NEED_ADVICE) 2) There are 2 points where I need real help (marked by NEED_HELP) 2.1) the aforementioned `.numpy()` in graph mode. 2.2) about `.from_pretrained`: Rag has two loading methods, `.from_pretrained_question_encoder_generator` and `.from_pretrained`. While `.from_pretrained_question_encoder_generator` works, in `.from_pretrained` there are two weights whose names do not match, which I could not find any way to fix:
```
'rag.generator.model.shared.weight', 'rag.generator.final_logits_bias' --> PyTorch name
'model.shared.weight', 'final_logits_bias' --> TF name
```
So at the moment I made an UGLY fix by overwriting .from_pretrained and manually loading these two weights. ## Who can review? TFRag: @patrickvonplaten, new TF input for master / 4.1.0: @jplu, graph mode & retriever module: will need help from @lhoestq later once all other issues are fixed :)
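On point 2.1, a rough sketch of the constraint and the `tf.py_function` escape hatch that comes up in the discussion below; `retrieve_docs` is a placeholder for the actual RagRetriever call, not real TFRag code:
```python
import tensorflow as tf

def retrieve_docs(question_hidden_states):
    # Placeholder: here the eager tensor's .numpy() array would be handed to the retriever.
    return tf.constant([[0]], dtype=tf.int32)

@tf.function
def call_retriever(question_hidden_states):
    # Inside a tf.function the tensors are symbolic, so .numpy() is unavailable.
    # tf.py_function drops back to eager execution for this one op (at a real performance cost).
    return tf.py_function(func=retrieve_docs, inp=[question_hidden_states], Tout=tf.int32)
```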
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8892/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8892", "html_url": "https://github.com/huggingface/transformers/pull/8892", "diff_url": "https://github.com/huggingface/transformers/pull/8892.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8892.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8891
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8891/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8891/comments
https://api.github.com/repos/huggingface/transformers/issues/8891/events
https://github.com/huggingface/transformers/issues/8891
755,025,828
MDU6SXNzdWU3NTUwMjU4Mjg=
8,891
providing an example with a dummy iterative dataloaders
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Trainer does indeed not work in distributed fashion with iterative datasets. You need to convert your iterative dataset to a regular dataset for the time being.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
NONE
null
Hi, I have tested trainer.py with iterative datasets and this does not work in the distributed case; I shard the data across the cores. Could you please assist me by providing a dummy iterative dataloader for the finetune_seq2seq.py model which runs fine with xla_spawn.py on TPU, so I get some understanding of which functions need to be implemented. I really need to make this work, and trainer.py does not seem to work with iterative datasets, or I am missing how to do it properly. Thanks @sgugger
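A minimal sketch of the kind of sharded iterable dataset meant here (everything below is illustrative and not taken from the reporter's code):
```python
from torch.utils.data import IterableDataset

class ShardedIterableDataset(IterableDataset):
    """Yields every `world_size`-th example starting at `rank`, so each core sees a disjoint shard."""

    def __init__(self, examples, rank, world_size):
        self.examples = examples
        self.rank = rank
        self.world_size = world_size

    def __iter__(self):
        for i, example in enumerate(self.examples):
            if i % self.world_size == self.rank:
                yield example
```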
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8891/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8890
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8890/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8890/comments
https://api.github.com/repos/huggingface/transformers/issues/8890/events
https://github.com/huggingface/transformers/pull/8890
754,924,044
MDExOlB1bGxSZXF1ZXN0NTMwNzMyOTY1
8,890
Update generation_beam_search.py
{ "login": "ZhaoQianfeng", "id": 53401404, "node_id": "MDQ6VXNlcjUzNDAxNDA0", "avatar_url": "https://avatars.githubusercontent.com/u/53401404?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhaoQianfeng", "html_url": "https://github.com/ZhaoQianfeng", "followers_url": "https://api.github.com/users/ZhaoQianfeng/followers", "following_url": "https://api.github.com/users/ZhaoQianfeng/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaoQianfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhaoQianfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaoQianfeng/subscriptions", "organizations_url": "https://api.github.com/users/ZhaoQianfeng/orgs", "repos_url": "https://api.github.com/users/ZhaoQianfeng/repos", "events_url": "https://api.github.com/users/ZhaoQianfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhaoQianfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @ZhaoQianfeng,\r\n\r\nThanks a lot for making the PR. I thought about this a bit and I think we don't have to change anything actually.\r\n\r\nThe reason is the following. Let's say you want to generate up to a length of 5.\r\n\r\nBOS is the start token which should be counted as part of the input length. However EOS should not be counted towards the sequence length for the length penalty IMO since it's the trigger to finish generation. \r\n\r\nSo the input:\r\n\r\n[BOS, hey, there, EOS] -> is ok for me to be counted as a sequence length of 3 ([BOS, Hey, there]) for the length penalty. I don't think the EOS token itself should penalize.\r\nHowever, un unfinished generation, such as [BOS, hey, there, how, are] should be counted to have a sequence length of 5 since it isn't finished.\r\n\r\nIf we would merge this PR, this would mean that [BOS, hey, there, peter, EOS] would receive the same length penalty as [BOS, hey, there, how, are], but IMO they should not. The first sequence is finished (*i.e.* shorter) than the second one.\r\n\r\nSo I'd prefer to leave it as it is. I think it's the right approach. Thanks a lot for looking into this however :-) ", "Hey @patrickvonplaten ,\r\nI think you are right!\r\n\r\nFor your example, my reason why ` [BOS, hey, there, peter, EOS]` and `[BOS, hey, there, how, are] `should have same length penalty is that the former probability is calculated by `log(P(hey))+log(P(there))+log(P(peter))+log(P(EOS))`, and the latter probability is calculated by `log(P(hey))+log(P(there)+log(P(how)+log(P(are))`, both 4 elements.So I used to think that they should be divided by same length.\r\n\r\nBut I think your explanation is more reasonable and convincing, the former sentence is actually shorter than the latter sentence!It should be what **length peanalty** really means. Thank you for taking the time to discuss this issue!:-)" ]
1,606
1,607
1,607
NONE
null
BeamHypotheses.add() now behaves differently depending on whether the hypothesis finished with or without an EOS token. # What does this PR do? See the discussion here: #8722 ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
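For reference, a simplified sketch of the length-penalised score that `BeamHypotheses.add()` applies, which is what the discussion below turns on: when an EOS token triggers `add()`, the hypothesis passed in does not include the EOS token, so a finished sequence is scored with the shorter length:
```python
# Simplified: not the actual library code, just the scoring idea.
def hypothesis_score(sum_logprobs, hyp_length, length_penalty):
    return sum_logprobs / (hyp_length ** length_penalty)

# [BOS, hey, there, peter] + EOS -> scored with length 4 (EOS itself not counted)
# [BOS, hey, there, how, are]    -> unfinished, scored with length 5
```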
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8890/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8890", "html_url": "https://github.com/huggingface/transformers/pull/8890", "diff_url": "https://github.com/huggingface/transformers/pull/8890.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8890.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8889
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8889/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8889/comments
https://api.github.com/repos/huggingface/transformers/issues/8889/events
https://github.com/huggingface/transformers/issues/8889
754,841,386
MDU6SXNzdWU3NTQ4NDEzODY=
8,889
trainer.py does not handle distributed training for iterative datasets and is very slow
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
NONE
null
## Environment info - `transformers` version: 3.5.1 - Platform: TPU - Python version: 3.7 - using xla_spawn.py ### Who can help @sgugger @patrickvonplaten @patil-suraj ## Information I am running seq2seq_finetune.py with iterative datasets and I do not get any speed-up for 8 TPU cores versus 1 TPU core; the code is even slower than 1 GPU. ## To reproduce ``` git clone [email protected]:google-research/ruse.git go to iter branch pip install -r requirements.txt python setup.py develop cd seq2seq python xla_spawn.py finetune_t5_trainer.py configs/mrpc_adapter_tpu.json ``` ## Expected behavior The script should be faster on TPU; to me, trainer.py does not handle iterative datasets properly. Could you have a look please? Thank you for your help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8889/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8888
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8888/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8888/comments
https://api.github.com/repos/huggingface/transformers/issues/8888/events
https://github.com/huggingface/transformers/issues/8888
754,816,355
MDU6SXNzdWU3NTQ4MTYzNTU=
8,888
clip_grad_norm on Multiple GPUs: (CUDA error: device-side assert triggered)
{ "login": "apteryxlabs", "id": 65966807, "node_id": "MDQ6VXNlcjY1OTY2ODA3", "avatar_url": "https://avatars.githubusercontent.com/u/65966807?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apteryxlabs", "html_url": "https://github.com/apteryxlabs", "followers_url": "https://api.github.com/users/apteryxlabs/followers", "following_url": "https://api.github.com/users/apteryxlabs/following{/other_user}", "gists_url": "https://api.github.com/users/apteryxlabs/gists{/gist_id}", "starred_url": "https://api.github.com/users/apteryxlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apteryxlabs/subscriptions", "organizations_url": "https://api.github.com/users/apteryxlabs/orgs", "repos_url": "https://api.github.com/users/apteryxlabs/repos", "events_url": "https://api.github.com/users/apteryxlabs/events{/privacy}", "received_events_url": "https://api.github.com/users/apteryxlabs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I would guess this is a memory error. Have you tried monitoring the memory available on your GPUs while the training is running?", "A CUDA device-side assert triggered means a bad index error somewhere, and persists until you restart your kernel. The code you provide does not allow us to reproduce the bug because it uses a tokenizer and a datasets we don't have access to. There are thus multiple reasons for a bad index error. If you want us to help, you'll need to give a reproducer using a pretrained tokenizer of the hub and a dataset on the hub (for instance GLUE MRPC is great since it's tiny).\r\n\r\nTo debug your problem locally:\r\n```\r\nfor batch in trainer.get_train_dataloader():\r\n break\r\nmodel.cpu()(**batch)\r\n```\r\nas on the CPU you will get a clear indication of where the index error is.", "@sgugger I'll work on making the datasets public and will post here. In the meantime, I'll run your snippet. \r\n\r\n@LysandreJik , it's not a memory issue - all four GPUs are at ~87% volatile util for the duration.", "@sgugger @LysandreJik I've made our bucket public, and the relevant material is in gs://bao-ai/transfer; you should be able to pull stuff down in jupyter via:\r\n`!gsutil -m cp -r gs://bao-ai/transfer .`\r\n... though I haven't tried that command on an unauthenticated computer.\r\n\r\nAlso, note I'm having the same issue in Colab ([notebook here](https://colab.research.google.com/drive/1y-Tgl_zPJzjrzsq3WeYeUFL9A6hGhLhD?usp=sharing)), so I suspect it's an issue with the dataset as suggested above. @sgugger could you elaborate on what sort of issues you mean by 'bad index'? Would rebuilding the dataset from our source files help? If so, are there steps I can take to make sure indexing issues don't arise?", "Note I also periodically get the following error messages at train time:\r\n```\r\n/home/b/anaconda3/envs/transformers_3/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\r\n warnings.warn('Was asked to gather along dimension 0, but all '\r\n```\r\nThat's copied directly - it cuts off like that in Jupyter.", "I rebuilt the dataset using the following script, reloaded, and trained - still breaking with the same error message. Here's the rebuild code:\r\n```\r\nbalanced = pickle.load(open('./balanced_ds.pickle', 'rb'))\r\nbal_df = datasets.Dataset.from_pandas(pd.DataFrame.from_records(balanced, columns = ['txt', 'labels']))\r\n\r\nBLOCK_SIZE = 512\r\ntok = RobertaTokenizerFast.from_pretrained(\"./art_tok_onefile_roberta_tuned/\")\r\n\r\nds_tokenized_no_special = bal_df.map(lambda example: tok(example['txt'], \r\n padding='max_length', \r\n max_length=BLOCK_SIZE, \r\n truncation=True,\r\n add_special_tokens = False), batched=True)\r\n\r\nds_tokenized_no_special.save_to_disk('./art_unit_tokenized_balanced_rebuild')\r\n```\r\n\r\nThis uses the same imports (probably redundantly) as the main script. You can access all the data using `!gsutil -m cp -r gs://bao-ai/transfer .`, same as above.\r\n\r\n\r\nI'm going to loop through all the data in the dataloader and see if it's returning anything janky. We're expecting tensors of size BATCHxBLOCK_SIZE for attention_mask and input_ids, and a tensor of size BATCHx1 for labels, right? 
Our labels are currently just 0 or 1, depending on whether a tokenized json document falls within a certain document class (art unit 3600, to be precise - this work is for patent law analysis).", "Note, I found experimentally that special tokens have to be removed from the tokenizer in order to be properly passed through the RobertaForSequenceClassification model; otherwise, we get an index error in the torch.nn.embedding step, due to the vocabulary exceeding the Roberta vocab size by 2.", "On Colab, before the trainer crashes, I get lots of these messages in the runtime logs:\r\n```/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [374,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.```", "Ran the following checks on the entire dataset - it passed. Training still fails. This would suggest to me that the dataset is not actually the issue:\r\n```\r\ndef no_nans(t):\r\n return bool(t.flatten().isnan().sum() == 0)\r\n\r\ndef check_batch(ex):\r\n masks = ex['attention_mask']\r\n masks_expected_shape = torch.Size([BATCH_SIZE, BLOCK_SIZE])\r\n masks_are_expected_shape = (masks.shape == masks_expected_shape)\r\n\r\n allowed_masks = [0,1]\r\n only_allowed_masks = all(i in allowed_masks for i in masks.flatten())\r\n\r\n masks_not_nan = no_nans(masks)\r\n\r\n masks_valid = (masks_are_expected_shape and only_allowed_masks and masks_not_nan)\r\n\r\n ids = ex['input_ids']\r\n ids_expected_shape = torch.Size([BATCH_SIZE, BLOCK_SIZE])\r\n ids_are_expected_shape = (ids.shape == ids_expected_shape)\r\n\r\n ids_within_vocab_range = ((ids.max() < tok.vocab_size + 4) and (ids.min() >= 0))\r\n\r\n ids_not_nan = no_nans(ids)\r\n\r\n ids_valid = (ids_are_expected_shape and ids_within_vocab_range)\r\n\r\n\r\n labels = ex['labels']\r\n allowed_labels = [0,1]\r\n only_allowed_labels = all(i in allowed_labels for i in labels.flatten())\r\n\r\n labels_are_expected_shape = labels.shape == torch.Size([BATCH_SIZE])\r\n\r\n labels_not_nan = no_nans(labels)\r\n\r\n labels_valid = (only_allowed_labels and labels_are_expected_shape and labels_not_nan)\r\n\r\n failures = {\r\n 'masks_are_expected_shape': masks_are_expected_shape,\r\n 'only_allowed_masks': only_allowed_masks,\r\n 'masks_not_nan': masks_not_nan,\r\n #'masks_valid': masks_valid,\r\n 'ids_are_expected_shape': ids_are_expected_shape,\r\n 'ids_within_vocab_range': ids_within_vocab_range,\r\n 'ids_not_nan': ids_not_nan,\r\n #'ids_valid': ids_valid,\r\n 'only_allowed_labels': only_allowed_labels,\r\n 'labels_are_expected_shape': labels_are_expected_shape,\r\n 'labels_not_nan': labels_not_nan,\r\n #'labels_valid': labels_valid\r\n }\r\n\r\n \r\n return ((masks_valid and ids_valid and labels_valid), ex, failures)\r\n\r\nfailed = []\r\nfor idx, ex in tqdm(enumerate(iter(loader)), total=len(loader)):\r\n passed, ex, fail = check_batch(ex)\r\n if not passed:\r\n print(f'{idx} Failed!')\r\n failed.append((idx, ex, fail))\r\n```\r\n\r\n(This, again, uses all the above code as a base).", "I also deactivated the train test split and deleted the cache files in the dataset - also fails.", "Your check for ids in the proper change seems incorrect:\r\n```\r\nids_within_vocab_range = ((ids.max() < tok.vocab_size + 4) and (ids.min() >= 0))\r\n```\r\nwill allow for the indices `tok.vocab_size` to `tok.vocab_size+3` which are all going to generate an index error given the fact your model as `vocab_size = tok.vocab_size`.\r\n", "@sgugger vocab indices +1 thru +4 are the special tokens, though, right? 
And the model should be able to accept them, right?", "*or +0 thru +3 if we're being pythonic with our indexing", "The highest token index in the entire dataset is the pad token. That shouldn't throw an indexing error. Can you reproduce on your end?", "> Your check for ids in the proper change seems incorrect:\r\n> \r\n> ```\r\n> ids_within_vocab_range = ((ids.max() < tok.vocab_size + 4) and (ids.min() >= 0))\r\n> ```\r\n> \r\n> will allow for the indices `tok.vocab_size` to `tok.vocab_size+3` which are all going to generate an index error given the fact your model as `vocab_size = tok.vocab_size`.\r\n\r\nWhat should the proper check for this step be?", "And regardless of the checks - what would token indices have to do with the ultimate error, which is in clip_grad.py? \r\n\r\nTruncated from above:\r\n```\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-11-3435b262f1ae> in <module>()\r\n----> 1 trainer.train()\r\n\r\n1 frames\r\n/usr/local/lib/python3.6/dist-packages/torch/nn/utils/clip_grad.py in clip_grad_norm_(parameters, max_norm, norm_type)\r\n 36 total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type).to(device) for p in parameters]), norm_type)\r\n 37 clip_coef = max_norm / (total_norm + 1e-6)\r\n---> 38 if clip_coef < 1:\r\n 39 for p in parameters:\r\n 40 p.grad.detach().mul_(clip_coef.to(p.grad.device))\r\n\r\nRuntimeError: CUDA error: device-side assert triggered\r\n```", "I faced similar issue while running on colab with Linux OS . Ttried restarting and resetting the kernal error disappeared . ", "@shivaraj1994 can you define 'kernel'? Do you mean the Jupyter Kernel, the python Kernel, or the Linux/Mac/Windows kernel? \r\n\r\nThis problem was replicated both on Linux and on a Colab account; I don't think it's an issue with the operating system of a given computer. \r\n\r\n**NOTE:** I was able to fix the problem by ditching the datasets library and using the older pytorch dataset paradigm. 
Full code below (split between three files - note, I'm not importing the HF datasets library; rather, I'm importing a custom module called datasets.py, from the same directory in which I'm running train.py):\r\n\r\n# train.py\r\n```\r\nfrom transformers import (RobertaTokenizerFast,\r\n RobertaForSequenceClassification,\r\n RobertaConfig)\r\n\r\nfrom transformers import Trainer, TrainingArguments\r\n\r\nimport pickle\r\n\r\nfrom datasets import BalancedDataset\r\nfrom collators import DataCollatorForDocumentClassificationBATCH\r\n\r\n\r\n'''\r\nTODO:\r\nUse custom tok (w/ special tokens removed)\r\nTrain more layers\r\nUse longformer as drop-in for Roberta\r\nMake nicer dataset (with train/test split, etc...)\r\n'''\r\n\r\nBLOCK_SIZE = 512\r\nBATCH_SIZE = 32\r\n\r\nbalanced = pickle.load(open('./balanced_ds.pickle', 'rb'))\r\n\r\n\r\ntok = RobertaTokenizerFast.from_pretrained('roberta-base') #\"./art_tok_onefile_roberta_tuned/\")\r\n\r\nbal_ds = BalancedDataset(tok, balanced, BLOCK_SIZE)\r\n\r\ncollator = DataCollatorForDocumentClassificationBATCH()\r\n\r\n\r\nconfig = RobertaConfig.from_pretrained(\"roberta-base\",\r\n vocab_size=tok.vocab_size,\r\n max_position_embeddings=514,\r\n num_labels = 2)\r\n\r\nmodel = RobertaForSequenceClassification.from_pretrained('roberta-base',\r\n config=config)\r\n\r\n\r\n#Disable training on all but the Classification Head!\r\nfor param in model.base_model.parameters():\r\n param.requires_grad = False\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./roberta_train_test\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=5,\r\n per_device_train_batch_size=BATCH_SIZE,\r\n save_steps=150,\r\n save_total_limit=2,\r\n logging_steps=20,\r\n max_grad_norm = 5,\r\n dataloader_num_workers = 15,\r\n #fp16 = True #Enable low-precision via AMP\r\n)\r\n\r\n\r\ntrainer = Trainer(\r\n model = model,\r\n args = training_args,\r\n data_collator = collator,\r\n train_dataset = bal_ds\r\n)\r\n\r\ntrainer.train()\r\n```\r\n\r\n# datasets.py\r\n```\r\nimport torch\r\nfrom torch.utils.data.dataset import Dataset\r\nfrom tqdm import tqdm\r\n\r\n\r\n\r\nclass BalancedDataset(Dataset):\r\n def __init__(self, tokenizer, data, block_size: int, limit=None):\r\n self.block_size = block_size\r\n self.tok = tokenizer\r\n\r\n print('Ingesting data!')\r\n # Load Data\r\n self.txt = [i[0] for i in tqdm(data[:limit])]\r\n self.labels = torch.tensor([i[1] for i in tqdm(data[:limit])])\r\n\r\n def __len__(self):\r\n return len(self.txt)\r\n\r\n def __getitem__(self, item):\r\n d = self.tok(self.txt[item], padding='max_length',\r\n truncation=True, max_length=self.block_size,\r\n return_tensors='pt')\r\n d['labels'] = self.labels[item]\r\n return d\r\n```\r\n\r\n# collators.py\r\n```\r\nfrom dataclasses import dataclass\r\nfrom typing import Dict, List, Union\r\n\r\nimport torch\r\n\r\n\r\n@dataclass\r\nclass DataCollatorForDocumentClassificationBATCH:\r\n def __call__(\r\n self, examples: List[Union[List[int], torch.Tensor, Dict[str, torch.Tensor]]]\r\n ) -> Dict[str, torch.Tensor]:\r\n return {\r\n 'input_ids': torch.stack([e['input_ids'] for e in examples]).squeeze(),\r\n 'attention_mask': torch.stack([e['attention_mask'] for e in examples]).squeeze(),\r\n 'labels': torch.stack([e['labels'] for e in examples]),\r\n }\r\n\r\n```", "I will keep the bao-ai bucket open to the public for a bit longer so y'all can attempt to replicate the original issue. 
We still need to figure out what was causing the clip_grad_norm issue in the first place.\r\n\r\n", "> Also, note I'm having the same issue in Colab ([notebook here](https://colab.research.google.com/drive/1y-Tgl_zPJzjrzsq3WeYeUFL9A6hGhLhD?usp=sharing)), so I suspect it's an issue with the dataset as suggested above. @sgugger could you elaborate on what sort of issues you mean by 'bad index'? Would rebuilding the dataset from our source files help? If so, are there steps I can take to make sure indexing issues don't arise?\r\n\r\n**Replication note:** I continued to debug in that Colab notebook; if you want to replicate the original issue, you'll need to use the old code, not the code that currently exists at the end of that link.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.0.0 - Platform: Linux-5.4.0-53-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help @LysandreJik @sgugger ## Information Model I am using (Bert, XLNet ...): RoBERTa The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) trainer.train() runs for a bit, then fails with the following output: ``` RuntimeError Traceback (most recent call last) <ipython-input-11-3435b262f1ae> in <module> ----> 1 trainer.train() ~/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path, trial) 759 torch.nn.utils.clip_grad_norm_(amp.master_params(self.optimizer), self.args.max_grad_norm) 760 else: --> 761 torch.nn.utils.clip_grad_norm_(model.parameters(), self.args.max_grad_norm) 762 763 if is_torch_tpu_available(): ~/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/utils/clip_grad.py in clip_grad_norm_(parameters, max_norm, norm_type) 33 total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type).to(device) for p in parameters]), norm_type) 34 clip_coef = max_norm / (total_norm + 1e-6) ---> 35 if clip_coef < 1: 36 for p in parameters: 37 p.grad.detach().mul_(clip_coef.to(p.grad.device)) RuntimeError: CUDA error: device-side assert triggered ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Training RoBERTa for sequence classification from text to binary. ## To reproduce Steps to reproduce the behavior: 1. Load pre-processed dataset from disk using datasets.Dataset.load_from_disk() 2. Instantiate RoBERTa from pretrained (roberta-base) with config mods (num_labels = 2) 3. Create and run trainer. See full code below (most imports omitted). ``` from transformers import (RobertaTokenizerFast) BLOCK_SIZE = 512 tok = RobertaTokenizerFast.from_pretrained("./art_tok_onefile_roberta_tuned/") ds_root = '/media/b/My Passport/datasets/' tokenized = datasets.Dataset.load_from_disk(os.path.join(ds_root, 'art_unit_tokenized_balanced')) columns_to_return = ['input_ids', 'attention_mask', 'labels'] tokenized.set_format(type='torch', columns=columns_to_return) from transformers import RobertaForSequenceClassification config = RobertaConfig( vocab_size=tok.vocab_size, max_position_embeddings=514, num_labels = 2 ) config = RobertaConfig.from_pretrained("roberta-base", vocab_size=tok.vocab_size, max_position_embeddings=514, num_labels = 2) model = RobertaForSequenceClassification.from_pretrained('roberta-base', config=config) optimizer = torch.optim.Adam(model.parameters(), lr=1e-5) for param in model.base_model.parameters(): param.requires_grad = False from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./roberta_train_test", overwrite_output_dir=True, num_train_epochs=5, per_device_train_batch_size=128, save_steps=50, save_total_limit=2, logging_steps=10, #fp16 = True #Enable low-precision via AMP - omitted for now. 
) train_test_bal = tokenized.train_test_split(test_size=0.1) trainer = Trainer( model=model, args=training_args, #data_collator=collate_fn, train_dataset=train_test_bal['train'] ) trainer.train() ``` ## Expected behavior The model trains for the duration of the training cycle.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8888/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8887
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8887/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8887/comments
https://api.github.com/repos/huggingface/transformers/issues/8887/events
https://github.com/huggingface/transformers/issues/8887
754,773,851
MDU6SXNzdWU3NTQ3NzM4NTE=
8,887
'Some weights of BertModel were not initialized from the model checkpoint at ./model and are newly initialized: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias']'
{ "login": "AlexanderTekle", "id": 11710567, "node_id": "MDQ6VXNlcjExNzEwNTY3", "avatar_url": "https://avatars.githubusercontent.com/u/11710567?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlexanderTekle", "html_url": "https://github.com/AlexanderTekle", "followers_url": "https://api.github.com/users/AlexanderTekle/followers", "following_url": "https://api.github.com/users/AlexanderTekle/following{/other_user}", "gists_url": "https://api.github.com/users/AlexanderTekle/gists{/gist_id}", "starred_url": "https://api.github.com/users/AlexanderTekle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlexanderTekle/subscriptions", "organizations_url": "https://api.github.com/users/AlexanderTekle/orgs", "repos_url": "https://api.github.com/users/AlexanderTekle/repos", "events_url": "https://api.github.com/users/AlexanderTekle/events{/privacy}", "received_events_url": "https://api.github.com/users/AlexanderTekle/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "maybe related to #8793 , hope could help.", "> maybe related to #8793 , hope could help.\r\n\r\nseems to be related. I get high variance in accuracy, I guessed it was probably because of the random initialization of those two weights.", "If you're using the `run_mlm.py`, then you're doing masked language modeling with the `BertForMaskedLM` model. This model does not make use of the pooler, hence why those two layers are randomly initialized. They're not used for predictions or training.", "@LysandreJik would it make more sense to load the saved model using BertModel.load_pretrained(saved_mlm_model) or would it be better to use BertModel.load(\"bert-base-uncased\") and copy the weights over from the saved model? ", "I also met this problem. Have you solved it?", "@wenHK It's actually not really relevant. The BertForMaskedLM model doesn't use the pooler layer, so thus why there are no weights assigned. You don't really need to worry about the warning.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
NONE
null
Hi everyone, I ran [ run_mlm.py ](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py) to continue pretraining uncased BERT directly from the examples on this repo, but once I load the newly saved pretrained BERT model, I receive a warning - "'Some weights of BertModel were not initialized from the model checkpoint at ./model and are newly initialized: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias']'" I'm trying to fine-tune the model on a sentiment analysis task, but I'm getting horrible results and I wonder if it has something to do with this? Thanks for your help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8887/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8886
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8886/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8886/comments
https://api.github.com/repos/huggingface/transformers/issues/8886/events
https://github.com/huggingface/transformers/issues/8886
754,754,315
MDU6SXNzdWU3NTQ3NTQzMTU=
8,886
UnicodeEncodeError: surrogates not allowed with GPT2Tokenizer
{ "login": "g-karthik", "id": 3851993, "node_id": "MDQ6VXNlcjM4NTE5OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/g-karthik", "html_url": "https://github.com/g-karthik", "followers_url": "https://api.github.com/users/g-karthik/followers", "following_url": "https://api.github.com/users/g-karthik/following{/other_user}", "gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}", "starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions", "organizations_url": "https://api.github.com/users/g-karthik/orgs", "repos_url": "https://api.github.com/users/g-karthik/repos", "events_url": "https://api.github.com/users/g-karthik/events{/privacy}", "received_events_url": "https://api.github.com/users/g-karthik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Unfortunately, if simply printing the string is impossible, this is out of our expertise. You have probably already seen those threads, but they may help you debug what's going on:\r\n\r\nhttps://stackoverflow.com/questions/27366479/python-3-os-walk-file-paths-unicodeencodeerror-utf-8-codec-cant-encode-s\r\nhttps://stackoverflow.com/questions/38147259/how-to-work-with-surrogate-pairs-in-python\r\n\r\nLet us know if you find an answer!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: EC2 - Python version: 3.6 ### Who can help @mfuntowicz @LysandreJik ## Information Model I am using: GPT-2 The problem arises when using the `GPT2Tokenizer` on a piece of text from a file that was written `utf-8` strings and is being opened in `utf-8`. ## To reproduce ``` tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl") text = "some utf-8 string" # this string is loaded from a file containing a dictionary {"text": "<some text>"} in each row - the file itself was written by converting TFRecords to text and "<some text>" was decoded explicitly to "utf-8" prior to being dumped into this dictionary and written text_ids = tokenizer.encode(text) ``` Stack trace I get: ``` Traceback (most recent call last): text_ids = tokenizer.encode(text) File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1730, in encode **kwargs, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2045, in encode_plus **kwargs, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 448, in _encode_plus first_ids = get_input_ids(text) File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 419, in get_input_ids tokens = self.tokenize(text, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 350, in tokenize tokenized_text = split_on_tokens(no_split_token, text) File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 344, in split_on_tokens for token in tokenized_text File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 344, in <genexpr> for token in tokenized_text File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_gpt2.py", line 237, in _tokenize self.byte_encoder[b] for b in token.encode("utf-8") UnicodeEncodeError: 'utf-8' codec can't encode characters in position 0-1: surrogates not allowed ``` ## Expected behavior It should tokenize and then convert the tokens to ids just fine, since `text` is a `utf-8` string. I'm trying to specifically identify the `text` itself from my file that leads to this error, but I am unable to print it either. I used a try-except block to catch the above `UnicodeEncodeError` and tried to print the `text`, but print itself expectedly failed because print is using the `ascii` codec. Is there a good way for me to identify the exact piece of text that led to this failure? Perhaps it'll help assist with debugging this issue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8886/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8885
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8885/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8885/comments
https://api.github.com/repos/huggingface/transformers/issues/8885/events
https://github.com/huggingface/transformers/pull/8885
754,700,416
MDExOlB1bGxSZXF1ZXN0NTMwNTU0MDQx
8,885
[ci] skip doc jobs take #3
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "There has been no traction so far on the circleci forums, I filed a support ticket with cirlceci.", "So `pipeline.git.base_revision` is consistently undefined when making a PR via direct file edit on github.\r\n", "So far so good.\r\n\r\nAnd while monitoring I discovered an interesting thing. In this particular PR my check doesn't actually do what I thought it did. It doesn't check the range of commits from the beginning of PR. The range it checks is actually just for the last commit. That `pipeline.git.base_revision` is very unruly.\r\n \r\nYou can see a good example of it here: https://github.com/huggingface/transformers/pull/8918\r\n\r\nIf you look at the checks for the last few commits which are doc-only commits - the jobs are skipped, whereas any commit that had code in it is not skipped.\r\n\r\nSo actually this is better than what I intended. Since if we check the full range and there are code files and then there is a subsequent commit that has only docs changed in my vision it'd run the jobs normally. But this is better! Since this checks each commit and decides whether to run the jobs or not based on just that commit - this is much more efficient than my original intention.\r\n\r\nI hope I explained it clearly.\r\n\r\n**edit** Hmm, but what happens if several commits are pushed at once - which file range will it check - since normally it checks just the last commit - this I'm not sure about. `pipeline.git.base_revision` is a wild card it seems.", "Mmmm, that does mean that if a PR changes code then has a last commit that only changes the doc, it will appear green to us, correct?\r\nIf so, we should fine a way to correct this behavior as it will lull us (and the user) in a false sense that everything is alright.", "I will run tests once github works again and adjust accordingly.\r\n\r\nI'm also in touch with an engineer at circleCI via their support - so hopefully we will get some solid answers rather than needing to validate all the different circumstances.", "I wasn't able to reproduce it, but it's very clear that it happened, and this is not what we want.\r\n\r\nAnd while what I wrote here https://github.com/huggingface/transformers/pull/8885#issuecomment-738583812 is super-cool, it can't work since github relies on the last check for the overall status. So, we can only skip a job if *all* files in PR were docs.\r\n\r\nSo I merged a change which disabled that struggling new feature, but added a log instead to continue monitoring it while waiting for circleCI support to get back to me." ]
1,606
1,607
1,606
CONTRIBUTOR
null
@LysandreJik found another edge case when a developer force-pushes a change and `pipeline.git.base_revision` is defined but bogus, resulting in a range that returns no files. https://github.com/huggingface/transformers/pull/8853#issuecomment-736781950 So the proposed logic for take 3 is: 1. if pipeline.git.base_revision and pipeline.git.revision are defined 2. if git diff --name-only range returns anything 3. if what it returned in 2 is just docs 4. then skip Bottom line, we skip the test altogether if: ``` unless test -n "<< pipeline.git.base_revision >>" && test -n "<< pipeline.git.revision >>" \ && test -n "$(git diff --name-only << pipeline.git.base_revision >>...<< pipeline.git.revision >>)" ``` @LysandreJik, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8885/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8885", "html_url": "https://github.com/huggingface/transformers/pull/8885", "diff_url": "https://github.com/huggingface/transformers/pull/8885.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8885.patch", "merged_at": 1606921606000 }
https://api.github.com/repos/huggingface/transformers/issues/8884
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8884/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8884/comments
https://api.github.com/repos/huggingface/transformers/issues/8884/events
https://github.com/huggingface/transformers/pull/8884
754,678,836
MDExOlB1bGxSZXF1ZXN0NTMwNTM2MzE1
8,884
[s2s finetune_trainer] add instructions for distributed training
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,607
1,607
CONTRIBUTOR
null
This PR adds instructions for running finetune_trainer.py under DDP @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8884/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8884", "html_url": "https://github.com/huggingface/transformers/pull/8884", "diff_url": "https://github.com/huggingface/transformers/pull/8884.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8884.patch", "merged_at": 1607040356000 }
https://api.github.com/repos/huggingface/transformers/issues/8883
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8883/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8883/comments
https://api.github.com/repos/huggingface/transformers/issues/8883/events
https://github.com/huggingface/transformers/issues/8883
754,666,693
MDU6SXNzdWU3NTQ2NjY2OTM=
8,883
Extracting important information
{ "login": "krrishdholakia", "id": 17561003, "node_id": "MDQ6VXNlcjE3NTYxMDAz", "avatar_url": "https://avatars.githubusercontent.com/u/17561003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krrishdholakia", "html_url": "https://github.com/krrishdholakia", "followers_url": "https://api.github.com/users/krrishdholakia/followers", "following_url": "https://api.github.com/users/krrishdholakia/following{/other_user}", "gists_url": "https://api.github.com/users/krrishdholakia/gists{/gist_id}", "starred_url": "https://api.github.com/users/krrishdholakia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krrishdholakia/subscriptions", "organizations_url": "https://api.github.com/users/krrishdholakia/orgs", "repos_url": "https://api.github.com/users/krrishdholakia/repos", "events_url": "https://api.github.com/users/krrishdholakia/events{/privacy}", "received_events_url": "https://api.github.com/users/krrishdholakia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!" ]
1,606
1,606
1,606
NONE
null
I'm trying to extract important information from a lecture transcript. What's the best way to go about doing this? This would be without a particular query parameter, just generally important information in the global context of the lecture.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8883/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8882
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8882/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8882/comments
https://api.github.com/repos/huggingface/transformers/issues/8882/events
https://github.com/huggingface/transformers/pull/8882
754,648,568
MDExOlB1bGxSZXF1ZXN0NTMwNTExNzI1
8,882
[trainer] start using training_args.parallel_mode
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you for adding this new property, @sgugger - it has indeed improved the readability!" ]
1,606
1,606
1,606
CONTRIBUTOR
null
Following up on https://github.com/huggingface/transformers/pull/8877 which adds `training_args.parallel_mode` to make it easy to comprehend which mode the trainer is running under - this PR deploys the new property in a few places. @sgugger, have I deployed it as you envisioned it?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8882/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8882", "html_url": "https://github.com/huggingface/transformers/pull/8882", "diff_url": "https://github.com/huggingface/transformers/pull/8882.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8882.patch", "merged_at": 1606851637000 }
https://api.github.com/repos/huggingface/transformers/issues/8881
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8881/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8881/comments
https://api.github.com/repos/huggingface/transformers/issues/8881/events
https://github.com/huggingface/transformers/pull/8881
754,604,432
MDExOlB1bGxSZXF1ZXN0NTMwNDc1NDg0
8,881
Better warning when loading a tokenizer with AutoTokenizer w/o Sneten…
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks!" ]
1,606
1,607
1,606
MEMBER
null
…cePiece Currently, initializing a `sentencepiece` `AutoTokenizer` without having `sentencepiece` installed results in the following error: ``` AttributeError: 'NoneType' object has no attribute 'from_pretrained' ``` This improves the error message to: ``` This tokenizer cannot be instantiated. Please make sure you have `sentencepiece` installed in order to use this tokenizer. ``` Fix #8864
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8881/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8881/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8881", "html_url": "https://github.com/huggingface/transformers/pull/8881", "diff_url": "https://github.com/huggingface/transformers/pull/8881.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8881.patch", "merged_at": 1606846392000 }
https://api.github.com/repos/huggingface/transformers/issues/8880
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8880/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8880/comments
https://api.github.com/repos/huggingface/transformers/issues/8880/events
https://github.com/huggingface/transformers/pull/8880
754,565,590
MDExOlB1bGxSZXF1ZXN0NTMwNDQzOTU5
8,880
[PyTorch] Refactor Resize Token Embeddings
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "MobileBERT does this in the `tie_weights` function. Should we do the same here?", "ALBERT also does it in the `_resize_token_embeddings`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/a7d46a060930242cd1de7ead8821f6eeebb0cd06/src/transformers/models/albert/modeling_albert.py#L635-L639\r\n\r\nIt probably should have been done in that method for MobileBERT as well", "Fine by me :-) Should we do the mobileBERT change in this PR?", "> Fine by me :-) Should we do the mobileBERT change in this PR?\r\n\r\nwill do!", "@patrickvonplaten Not 100% sure that this fix works, forked and implemented the fixes as shown and receive the following error upon training:\r\n```\r\n Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding). \r\n```\r\nIf I revert back to transformers-4.0.0 and run exactly the same script, with the same data then T5 trains fine. Could be user error, although I hope not! Did look back through the script and try and diagnose, but no luck as of yet.", "> @patrickvonplaten Not 100% sure that this fix works, forked and implemented the fixes as shown and receive the following error upon training:\r\n> \r\n> ```\r\n> Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding). \r\n> ```\r\n> \r\n> If I revert back to transformers-4.0.0 and run exactly the same script, with the same data then T5 trains fine. Could be user error, although I hope not! Did look back through the script and try and diagnose, but no luck as of yet.\r\n\r\nCan you attach a simple code snippet showing what code produces your error? It's for T5 no? ", "> \r\n> \r\n> > @patrickvonplaten Not 100% sure that this fix works, forked and implemented the fixes as shown and receive the following error upon training:\r\n> > ```\r\n> > Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding). \r\n> > ```\r\n> > \r\n> > \r\n> > If I revert back to transformers-4.0.0 and run exactly the same script, with the same data then T5 trains fine. Could be user error, although I hope not! Did look back through the script and try and diagnose, but no luck as of yet.\r\n> \r\n> Can you attach a simple code snippet showing what code produces your error? 
It's for T5 no?\r\n\r\nSure:\r\n\r\n```python\r\nfrom transformers import T5TokenizerFast, T5ForConditionalGeneration\r\ndev = \"cuda\"\r\n\r\nMODEL_NAME = 'google/t5-v1_1-base'\r\ntokenizer = T5TokenizerFast.from_pretrained('t5-base')\r\nspecial_tokens_dict = {'additional_special_tokens': ['<ORG>','<PERSON>']}\r\nnum_added_tokens = tokenizer.add_special_tokens(special_tokens_dict)\r\nprint(f'ADDED TOKENS: {num_added_tokens}')\r\nmodel = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\nmodel.to(dev)\r\nBATCH_SIZE = 8\r\n```\r\n\r\n```python\r\n#Sets the module in training mode\r\nfrom IPython.display import HTML, display\r\ndef progress(loss,value, max=100):\r\n return HTML(\"\"\" Batch loss :{loss} <progress \r\nvalue='{value}'max='{max}',style='width: 100%'>{value}\r\n </progress> \"\"\".format(loss=loss,value=value, max=max))\r\n\r\nmodel.train()\r\nnum_of_batches= int(len(train_df) / BATCH_SIZE)\r\nprint(num_of_batches)\r\nNUM_EPOCHS = 1\r\nloss_per_10_steps=[]\r\nloss_values = []\r\nfor epoch in range(1,NUM_EPOCHS+1):\r\n print('Running epoch: {}'.format(epoch))\r\n \r\n running_loss=0\r\n\r\n out = display(progress(1, num_of_batches+1), display_id=True)\r\n for i in range(num_of_batches):\r\n inputbatch=[]\r\n labelbatch=[]\r\n new_df=train_df[i*BATCH_SIZE:i*BATCH_SIZE+BATCH_SIZE]\r\n for indx,row in new_df.iterrows():\r\n input = 'Product: '+row['product_name']\r\n labels = row['product_description']\r\n inputbatch.append(input)\r\n labelbatch.append(labels)\r\n inputbatch=tokenizer.batch_encode_plus(inputbatch,padding=True, max_length=512,return_tensors='pt')[\"input_ids\"]\r\n labelbatch=tokenizer.batch_encode_plus(labelbatch,padding=True, max_length=512,return_tensors=\"pt\") [\"input_ids\"]\r\n inputbatch=inputbatch.to(dev)\r\n labelbatch=labelbatch.to(dev)\r\n\r\n # clear out the gradients of all Variables \r\n optimizer.zero_grad()\r\n\r\n # Forward propogation\r\n outputs = model(input_ids=inputbatch, labels=labelbatch)\r\n loss = outputs.loss\r\n loss_num=loss.item()\r\n logits = outputs.logits\r\n running_loss+=loss_num\r\n if i%10 ==0: \r\n loss_per_10_steps.append(loss_num)\r\n out.update(progress(loss_num,i, num_of_batches+1))\r\n\r\n # calculating the gradients\r\n loss.backward()\r\n\r\n #updating the params\r\n optimizer.step()\r\n \r\n loss_values.append(loss_num)\r\n running_loss=running_loss/int(num_of_batches)\r\n```\r\n\r\n", "> > > @patrickvonplaten Not 100% sure that this fix works, forked and implemented the fixes as shown and receive the following error upon training:\r\n> > > ```\r\n> > > Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding). \r\n> > > ```\r\n> > > \r\n> > > \r\n> > > If I revert back to transformers-4.0.0 and run exactly the same script, with the same data then T5 trains fine. Could be user error, although I hope not! Did look back through the script and try and diagnose, but no luck as of yet.\r\n> > \r\n> > \r\n> > Can you attach a simple code snippet showing what code produces your error? 
It's for T5 no?\r\n> \r\n> Sure:\r\n> \r\n> ```\r\n> MODEL_NAME = 'google/t5-v1_1-base'\r\n> tokenizer = T5TokenizerFast.from_pretrained('t5-base')\r\n> special_tokens_dict = {'additional_special_tokens': ['<ORG>','<PERSON>']}\r\n> num_added_tokens = tokenizer.add_special_tokens(special_tokens_dict)\r\n> print(f'ADDED TOKENS: {num_added_tokens}')\r\n> model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)\r\n> model.resize_token_embeddings(len(tokenizer))\r\n> model.to(dev)\r\n> BATCH_SIZE = 8\r\n> ```\r\n> \r\n> ```\r\n> #Sets the module in training mode\r\n> from IPython.display import HTML, display\r\n> def progress(loss,value, max=100):\r\n> return HTML(\"\"\" Batch loss :{loss} <progress \r\n> value='{value}'max='{max}',style='width: 100%'>{value}\r\n> </progress> \"\"\".format(loss=loss,value=value, max=max))\r\n> \r\n> model.train()\r\n> num_of_batches= int(len(train_df) / BATCH_SIZE)\r\n> print(num_of_batches)\r\n> NUM_EPOCHS = 1\r\n> loss_per_10_steps=[]\r\n> loss_values = []\r\n> for epoch in range(1,NUM_EPOCHS+1):\r\n> print('Running epoch: {}'.format(epoch))\r\n> \r\n> running_loss=0\r\n> \r\n> out = display(progress(1, num_of_batches+1), display_id=True)\r\n> for i in range(num_of_batches):\r\n> inputbatch=[]\r\n> labelbatch=[]\r\n> new_df=train_df[i*BATCH_SIZE:i*BATCH_SIZE+BATCH_SIZE]\r\n> for indx,row in new_df.iterrows():\r\n> input = 'Product: '+row['product_name']\r\n> labels = row['product_description']\r\n> inputbatch.append(input)\r\n> labelbatch.append(labels)\r\n> inputbatch=tokenizer.batch_encode_plus(inputbatch,padding=True, max_length=512,return_tensors='pt')[\"input_ids\"]\r\n> labelbatch=tokenizer.batch_encode_plus(labelbatch,padding=True, max_length=512,return_tensors=\"pt\") [\"input_ids\"]\r\n> inputbatch=inputbatch.to(dev)\r\n> labelbatch=labelbatch.to(dev)\r\n> \r\n> # clear out the gradients of all Variables \r\n> optimizer.zero_grad()\r\n> \r\n> # Forward propogation\r\n> outputs = model(input_ids=inputbatch, labels=labelbatch)\r\n> loss = outputs.loss\r\n> loss_num=loss.item()\r\n> logits = outputs.logits\r\n> running_loss+=loss_num\r\n> if i%10 ==0: \r\n> loss_per_10_steps.append(loss_num)\r\n> out.update(progress(loss_num,i, num_of_batches+1))\r\n> \r\n> # calculating the gradients\r\n> loss.backward()\r\n> \r\n> #updating the params\r\n> optimizer.step()\r\n> \r\n> loss_values.append(loss_num)\r\n> running_loss=running_loss/int(num_of_batches)\r\n> ```\r\n\r\nthanks! `dev` would be equal to `\"cuda\"` I suppose? ", "> \r\n> \r\n> > > > @patrickvonplaten Not 100% sure that this fix works, forked and implemented the fixes as shown and receive the following error upon training:\r\n> > > > ```\r\n> > > > Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding). \r\n> > > > ```\r\n> > > > \r\n> > > > \r\n> > > > If I revert back to transformers-4.0.0 and run exactly the same script, with the same data then T5 trains fine. Could be user error, although I hope not! Did look back through the script and try and diagnose, but no luck as of yet.\r\n> > > \r\n> > > \r\n> > > Can you attach a simple code snippet showing what code produces your error? 
It's for T5 no?\r\n> > \r\n> > \r\n> > Sure:\r\n> > ```\r\n> > MODEL_NAME = 'google/t5-v1_1-base'\r\n> > tokenizer = T5TokenizerFast.from_pretrained('t5-base')\r\n> > special_tokens_dict = {'additional_special_tokens': ['<ORG>','<PERSON>']}\r\n> > num_added_tokens = tokenizer.add_special_tokens(special_tokens_dict)\r\n> > print(f'ADDED TOKENS: {num_added_tokens}')\r\n> > model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)\r\n> > model.resize_token_embeddings(len(tokenizer))\r\n> > model.to(dev)\r\n> > BATCH_SIZE = 8\r\n> > ```\r\n> > \r\n> > \r\n> > ```\r\n> > #Sets the module in training mode\r\n> > from IPython.display import HTML, display\r\n> > def progress(loss,value, max=100):\r\n> > return HTML(\"\"\" Batch loss :{loss} <progress \r\n> > value='{value}'max='{max}',style='width: 100%'>{value}\r\n> > </progress> \"\"\".format(loss=loss,value=value, max=max))\r\n> > \r\n> > model.train()\r\n> > num_of_batches= int(len(train_df) / BATCH_SIZE)\r\n> > print(num_of_batches)\r\n> > NUM_EPOCHS = 1\r\n> > loss_per_10_steps=[]\r\n> > loss_values = []\r\n> > for epoch in range(1,NUM_EPOCHS+1):\r\n> > print('Running epoch: {}'.format(epoch))\r\n> > \r\n> > running_loss=0\r\n> > \r\n> > out = display(progress(1, num_of_batches+1), display_id=True)\r\n> > for i in range(num_of_batches):\r\n> > inputbatch=[]\r\n> > labelbatch=[]\r\n> > new_df=train_df[i*BATCH_SIZE:i*BATCH_SIZE+BATCH_SIZE]\r\n> > for indx,row in new_df.iterrows():\r\n> > input = 'Product: '+row['product_name']\r\n> > labels = row['product_description']\r\n> > inputbatch.append(input)\r\n> > labelbatch.append(labels)\r\n> > inputbatch=tokenizer.batch_encode_plus(inputbatch,padding=True, max_length=512,return_tensors='pt')[\"input_ids\"]\r\n> > labelbatch=tokenizer.batch_encode_plus(labelbatch,padding=True, max_length=512,return_tensors=\"pt\") [\"input_ids\"]\r\n> > inputbatch=inputbatch.to(dev)\r\n> > labelbatch=labelbatch.to(dev)\r\n> > \r\n> > # clear out the gradients of all Variables \r\n> > optimizer.zero_grad()\r\n> > \r\n> > # Forward propogation\r\n> > outputs = model(input_ids=inputbatch, labels=labelbatch)\r\n> > loss = outputs.loss\r\n> > loss_num=loss.item()\r\n> > logits = outputs.logits\r\n> > running_loss+=loss_num\r\n> > if i%10 ==0: \r\n> > loss_per_10_steps.append(loss_num)\r\n> > out.update(progress(loss_num,i, num_of_batches+1))\r\n> > \r\n> > # calculating the gradients\r\n> > loss.backward()\r\n> > \r\n> > #updating the params\r\n> > optimizer.step()\r\n> > \r\n> > loss_values.append(loss_num)\r\n> > running_loss=running_loss/int(num_of_batches)\r\n> > ```\r\n> \r\n> thanks! `dev` would be equal to `\"cuda\"` I suppose?\r\n\r\nYeah \"cuda\" sorry.", "Could you also attach some code for `train_df` and `optimizer`? So that I can fully reproduce :-) ", "> \r\n> \r\n> Could you also attach some code for `train_df` and `optimizer`? 
So that I can fully reproduce :-)\r\n\r\nSure!\r\n```\r\noptimizer = Adafactor(model.parameters(),lr=1e-3,\r\n eps=(1e-30, 1e-3),\r\n clip_threshold=1.0,\r\n decay_rate=-0.8,\r\n beta1=None,\r\n weight_decay=0.0,\r\n relative_step=False,\r\n scale_parameter=False,\r\n warmup_init=False)\r\n```\r\n\r\ntrain_df is just a dataframe containing something like the following:\r\n product_name\tproduct_description\r\n37245\tTest Product 1\tTest Description 1\r\n23451\tTest Product 2 Test Description 2 \r\n\r\nNot sure how to attach a file via GitHub my apologies.\r\n", "> @patrickvonplaten Not 100% sure that this fix works, forked and implemented the fixes as shown and receive the following error upon training:\r\n> \r\n> ```\r\n> Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding). \r\n> ```\r\n> \r\n> If I revert back to transformers-4.0.0 and run exactly the same script, with the same data then T5 trains fine. Could be user error, although I hope not! Did look back through the script and try and diagnose, but no luck as of yet.\r\n\r\nI tried with some dummy training data and it works for me...not sure what the problem is. Also the error message hints at a wrong `dtype` of either `input_ids` or `labels`...\r\n\r\nCould you try to do the following:\r\n\r\n```python\r\n inputbatch=inputbatch.to(dev).to(torch.long)\r\n labelbatch=labelbatch.to(dev).to(torch.long)\r\n```\r\n\r\nand see if the error persists?", "> ```python\r\n> .to(torch.long)\r\n> ```\r\n\r\nnever mind, I can reproduce! Thanks for the message! Will see how to fix it -> weird error", "> \r\n> \r\n> > ```python\r\n> > .to(torch.long)\r\n> > ```\r\n> \r\n> never mind, I can reproduce! Thanks for the message! Will see how to fix it -> weird error\r\n\r\nGreat thank you :) \r\n\r\nPrevious issue describing the same error #7026 . Gave me some guidance but couldn't quite work it out.", "> > > ```python\r\n> > > .to(torch.long)\r\n> > > ```\r\n> > \r\n> > \r\n> > never mind, I can reproduce! Thanks for the message! Will see how to fix it -> weird error\r\n> \r\n> Great thank you :)\r\n> \r\n> Previous issue describing the same error #7026 . Gave me some guidance but couldn't quite work it out.\r\n\r\nShould be good now - was 100% introduces by this PR -> thanks a lot for spotting it!", "> \r\n> \r\n> > > > ```python\r\n> > > > .to(torch.long)\r\n> > > > ```\r\n> > > \r\n> > > \r\n> > > never mind, I can reproduce! Thanks for the message! Will see how to fix it -> weird error\r\n> > \r\n> > \r\n> > Great thank you :)\r\n> > Previous issue describing the same error #7026 . Gave me some guidance but couldn't quite work it out.\r\n> \r\n> Should be good now - was 100% introduces by this PR -> thanks a lot for spotting it!\r\n\r\nAmazing thank you will run through a test this evening. ", "@sgugger @LysandreJik - I updated the PR description. It's good to merge for me. Let me know what you think." ]
1,606
1,606
1,606
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR extends the `resize_embeddings` function in PyTorch to models that have input/output embeddings that are **not** tied. In PyTorch all models that have tied input/output embeddings by default can also untie those embeddings by setting `config.tie_word_embeddings=False`. This however requires the `_resize_token_embeddings` to be extended to also resize the `lm_head`. This PR does this extension by adding a `_get_resized_lm_head` method. Also, all models that have a `get_output_embedding()` function, now need a `set_output_embedding()` function. A test is added to make sure the new functionality works as expected. The Bart-like models currently skip this test because there is a rather weird `lm_head` behavior that I want to refactor in another PR. In addition this PR: - Fixes #8706: With MT5 and T5v1_1, T5 now has a configuration where input and output embeddings are not tied anymore. This PR fixes this. - Refactors MobileBert ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8880/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8880/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8880", "html_url": "https://github.com/huggingface/transformers/pull/8880", "diff_url": "https://github.com/huggingface/transformers/pull/8880.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8880.patch", "merged_at": 1606933190000 }
https://api.github.com/repos/huggingface/transformers/issues/8879
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8879/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8879/comments
https://api.github.com/repos/huggingface/transformers/issues/8879/events
https://github.com/huggingface/transformers/issues/8879
754,557,183
MDU6SXNzdWU3NTQ1NTcxODM=
8,879
dropout(): argument 'input' (position 1) must be Tensor, not str With Bert
{ "login": "Tashsub", "id": 44523844, "node_id": "MDQ6VXNlcjQ0NTIzODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/44523844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tashsub", "html_url": "https://github.com/Tashsub", "followers_url": "https://api.github.com/users/Tashsub/followers", "following_url": "https://api.github.com/users/Tashsub/following{/other_user}", "gists_url": "https://api.github.com/users/Tashsub/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tashsub/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tashsub/subscriptions", "organizations_url": "https://api.github.com/users/Tashsub/orgs", "repos_url": "https://api.github.com/users/Tashsub/repos", "events_url": "https://api.github.com/users/Tashsub/events{/privacy}", "received_events_url": "https://api.github.com/users/Tashsub/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "having the **same problem**, what is happening, it was working just fine for the past like 90 days!!\r\n\r\n", "Hello! It would be very helpful if you could complete the information related to your environment. If you could have a reproducible code example, that would really be great as well.\r\n\r\nIt is possible you were affected by the breaking changes from v3.x to v4.x. If this is the case, I invite you to read the [migration notes](https://huggingface.co/transformers/migration.html), or to pin the transformers library to the major version 3: `pip install transformers==3`", "https://github.com/mosh98/Swedish_Sentiment_BERTIL/blob/main/Swe_Bert_Training_Bigger_dataset.ipynb", "I am afraid I see no error in your notebook.", "okej thanks for the tip, pinning it to version 3 did the trick!", "Thanks @LysandreJik, \r\n\r\nIt works but creates a new error here that says: \r\n\r\nError(s) in loading state_dict for SentimentClassifier:\r\n\tUnexpected key(s) in state_dict: \"bert.embeddings.position_ids\". \r\n[Notebook \r\n](https://colab.research.google.com/drive/1fEXY3IQ82u41KvwoDOg-oY95RDD1OpKg?usp=sharing)\r\n\r\n**After running the code below**: \r\n\r\n```\r\nsaved_model = torch.load('selective_stock_dataset_state-2.bin')\r\nmodel = SentimentClassifier(len(class_names))\r\nmodel.load_state_dict(saved_model)\r\nmodel = model.to(device)\r\n\r\n```\r\n\r\n\r\n\r\n```\r\nclass SentimentClassifier(nn.Module): \r\n def __init__(self, n_classes):\r\n super(SentimentClassifier, self).__init__()\r\n self.bert = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)\r\n self.drop = nn.Dropout(p=0.3)\r\n self.out = nn.Linear(self.bert.config.hidden_size, n_classes)\r\n self.softmax = nn.Softmax(dim=1)\r\n\r\n def forward(self, input_ids, attention_mask):\r\n _, pooled_output = self.bert(\r\n input_ids=input_ids,\r\n attention_mask=attention_mask\r\n )\r\n output = self.drop(pooled_output)\r\n output = self.out(output)\r\n return self.softmax(output)\r\n```\r\n\r\nCould you advise. Please and thanks ", "as he mentioned earlier, try using `pip install transformers==3`", "@mosh98 , I have tried with `pip install transformers==3` and it removed the first error. \r\n\r\nBut i then get a new error that I was no getting before that says \r\n\r\n`Error(s) in loading state_dict for SentimentClassifier:\r\nUnexpected key(s) in state_dict: \"bert.embeddings.position_ids\".`\r\n\r\nsee my notebook here: [notebook](https://colab.research.google.com/drive/1fEXY3IQ82u41KvwoDOg-oY95RDD1OpKg?usp=sharing#scrollTo=iQ93LDzMXO58l)", "Ahh it was solved by changing \r\n\r\n`model.load_state_dict(saved_model)`\r\n\r\nto \r\n\r\n`model.load_state_dict(saved_model, strict=False)`\r\n", "Hi, indeed, this is a different error. We recommend using the `from_pretrained` method (your custom model would need to inherit from `PreTrainedModel` rather than `nn.Module`) rather than using `load_state_dict` to ensure maximum compatibility between checkpoints and architectures, otherwise the state dicts might not be 100% loadable on each custom architecture.\r\n\r\nYour workaround using `strict=False` also works!", "\"\"\"pip install transformers==3\"\"\" doesnt seem to work\r\n", "No need to downgrade the transformers. Just do the following - it's from the migration guide.\r\n\r\n```\r\nmodel = BertModel.from_pretrained(\"bert-base-cased\")\r\noutputs = model(**inputs, return_dict=False)\r\n```\r\n", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "`outputs = model(**inputs, return_dict=False)`\r\n\r\nor\r\n\r\n`model = BertModel.from_pretrained(\"bert-base-cased\",return_dict=False)`", "> `outputs = model(**inputs, return_dict=False)`\r\n> \r\n> or\r\n> \r\n> `model = BertModel.from_pretrained(\"bert-base-cased\",return_dict=False)`\r\n\r\ncool, it works.", "> > `outputs = model(**inputs, return_dict=False)`\r\n> > or\r\n> > `model = BertModel.from_pretrained(\"bert-base-cased\",return_dict=False)`\r\n> \r\n> cool, it works.\r\n\r\ngreat! it worked for me too, thanks a million :) ", "Still does not work for me transformers 4.24", "hi \r\ni am also facing the same issue. i am applying transformer on image data. i did the training. i haven't face any issues while training. but in prediction it throws the error\r\nTypeError: dropout(): argument 'input' (position 1) must be Tensor, not NoneType\r\n\r\nplease resolve if possible", "Hey @dishamohini could you make sure to do the following:\r\n- check that you are using the latest version of `transformers`\r\n- check that you are correctly running the model in eval mode\r\nIf you still have an issue:\r\n- open a new issue on `transformers`, following the contribution guidelines with the output of `transformers-cli envs`, and a full minimal reproducer as well as the trace-back \r\n- ping me there" ]
1,606
1,694
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: google colab - Python version: 3 - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): Bert @LysandreJik @jplu The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) I am trying to do sentiment analysis using Bert. My code was working perfectly fine and then last night I tried to run it without changing anything and I am getting the following error message: "dropout(): argument 'input' (position 1) must be Tensor, not str" I trained my Bert model and saved the bin file. This occurs when I load the bin file into collab and try to predict the sentiment of any text. ## To reproduce Steps to reproduce the behavior: 1. Loaded my model that was saved in a bin file in google colab 2. Ran the following code: `def conclude_sentiment(text): encoded_review = tokenizer.encode_plus( text, max_length=MAX_LEN, add_special_tokens=True, return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', ) input_ids = encoded_review['input_ids'].to(device) attention_mask = encoded_review['attention_mask'].to(device) output = model(input_ids, attention_mask) _, prediction = torch.max(output, dim=1) #print(f'Review text: {text}') #print(f'Sentiment : {class_names[prediction]}') return class_names[prediction]` 3. Got an error that says `/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in dropout(input, p, training, inplace) 981 return (_VF.dropout_(input, p, training) 982 if inplace --> 983 else _VF.dropout(input, p, training)) 984 985 TypeError: dropout(): argument 'input' (position 1) must be Tensor, not str` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> An output of either 'positive' or 'negative' when a string is passed into the method named 'conclude_sentiment'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8879/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8878
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8878/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8878/comments
https://api.github.com/repos/huggingface/transformers/issues/8878/events
https://github.com/huggingface/transformers/pull/8878
754,554,228
MDExOlB1bGxSZXF1ZXN0NTMwNDM0NzMz
8,878
Better support for resuming training
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
COLLABORATOR
null
# What does this PR do? This PR adds two things linked to resuming training: 1. It brings full reproducibility when resuming an interrupted training from a checkpoint (i.e., resuming a training from a checkpoint will give the exact same results as a training from the beginning with the same seeding). This was not currently the case because the dataloader shuffle was not triggered `epochs_already_trained` times, so the shuffle of the dataloader was the same as epoch 0. So the full reproducibility was only there for trainings resumed from an early checkpoint (during the first epoch). 2. It also adds the option to ignore that data skipping which can take a very long time on a large dataset. This will go faster but yield different results from a training from scratch. Fixes #8874 and #8876
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8878/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8878", "html_url": "https://github.com/huggingface/transformers/pull/8878", "diff_url": "https://github.com/huggingface/transformers/pull/8878.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8878.patch", "merged_at": 1606848322000 }
https://api.github.com/repos/huggingface/transformers/issues/8877
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8877/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8877/comments
https://api.github.com/repos/huggingface/transformers/issues/8877/events
https://github.com/huggingface/transformers/pull/8877
754,535,720
MDExOlB1bGxSZXF1ZXN0NTMwNDE5MzYx
8,877
Add a `parallel_mode` property to TrainingArguments
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Given our discussion yesterday, I'm not sure `distributed_env` is fitting. As you convinced me that DP is not distributed when it comes to pytorch conventions, `if self.distributed_env == \"dp\"` is back to being confusing.\r\n\r\nGiven that with the exception of tpu, all dp/ddp/mp/pp are SomethingParallel, should it be called `parallel_mode`?\r\n\r\nI don't know anything about tpu, so it's hard for me to know where it fits. But it's probably not distributed either. And not parallel either.\r\n\r\nSo perhaps we call it `compute_env`", "LGTM, @sgugger!\r\n" ]
1,606
1,606
1,606
COLLABORATOR
null
# What does this PR do? This PR adds a `distributed_env` property to the `TrainingArugments` making it clear if we are in: - a single process (CPU or one GPU) - a parallel setting (one process but several GPUs) - a distributed parallel setting (several processes, one per GPU) - a TPU setting Fixes #8858
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8877/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8877/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8877", "html_url": "https://github.com/huggingface/transformers/pull/8877", "diff_url": "https://github.com/huggingface/transformers/pull/8877.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8877.patch", "merged_at": 1606848370000 }
https://api.github.com/repos/huggingface/transformers/issues/8876
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8876/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8876/comments
https://api.github.com/repos/huggingface/transformers/issues/8876/events
https://github.com/huggingface/transformers/issues/8876
754,524,805
MDU6SXNzdWU3NTQ1MjQ4MDU=
8,876
Resume training from checkpoint: not progressing
{ "login": "mattivi", "id": 1651448, "node_id": "MDQ6VXNlcjE2NTE0NDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1651448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mattivi", "html_url": "https://github.com/mattivi", "followers_url": "https://api.github.com/users/mattivi/followers", "following_url": "https://api.github.com/users/mattivi/following{/other_user}", "gists_url": "https://api.github.com/users/mattivi/gists{/gist_id}", "starred_url": "https://api.github.com/users/mattivi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mattivi/subscriptions", "organizations_url": "https://api.github.com/users/mattivi/orgs", "repos_url": "https://api.github.com/users/mattivi/repos", "events_url": "https://api.github.com/users/mattivi/events{/privacy}", "received_events_url": "https://api.github.com/users/mattivi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It is expected that this would take some time, since it has to skip through `204,516` batches before continuing training. It will continue progressing after that skip is done.", "In the PR mentioned above, I'm adding a flag to ignore that step if you're prepared to pay the price of having the training be slightly different from a training from scratch to go faster.", "That would do for my case, thanks!", "With that PR everything worked as expected, thanks for the very quick turnaround!", "Happy to hear!", "@sgugger Sorry to bother, but I am wondering why skipping steps takes computing. I mean, the random_seed is specified, so the trainer just need to find the breakpoint of an epoch and resume, I shouldn't take much time.\r\n\r\nSo does any parts of my understand are wrong?", "Yes, but there is no way to be in the exact sample place in the dataloaders (that have randomness with the shuffling) without going through the first epochs and then batches.", "@sgugger thanks for the information, sorry to revive this issue. How long does it usually take to go through the first epochs and then batches? half of what it took to train until that point or less?", "It depends on your data and the time needed for your preprocessing. Note that there is a progress bar in the newer versions of Transformers so you can get a sense of the remaining time. You can also skip this with the [flag `ignore_data_skip`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.ignore_data_skip) though the model will train on already seen data in this case.", "I have the same issue,\r\n```\r\n***** Running training *****\r\n Num examples = 2,560,000\r\n Num Epochs = 9,223,372,036,854,775,807\r\n Instantaneous batch size per device = 8\r\n Total train batch size (w. parallel, distributed & accumulation) = 16\r\n Gradient Accumulation steps = 2\r\n Total optimization steps = 160,000\r\n Number of trainable parameters = 332,891,919\r\n Continuing training from checkpoint, will skip to saved global_step\r\n Continuing training from epoch 0\r\n Continuing training from global step 77000\r\n Will skip the first 0 epochs then the first 154000 batches in the first epoch.\r\n 0%| | 0/160000 [00:00<?, ?it/s]\r\n The following columns in the training set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: locale, audio, input_length, sentence. If locale, audio, input_length, sentence are not expected by `Wav2Vec2ForCTC.forward`, you can safely ignore this message.\r\nThere seems to be not a single sample in your epoch_iterator, stopping training at step 77000! This is expected if you're using an IterableDataset and set num_steps (160000) higher than the number of available samples.\r\n\r\n\r\nTraining completed. 
Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'train_runtime': 1869.2894, 'train_samples_per_second': 1369.504, 'train_steps_per_second': 85.594, 'train_loss': 0.0, 'epoch': 43.01} \r\n 0%| | 0/160000 [31:09<?, ?it/s]Saving model checkpoint to /usr/local/bin/source/output\r\nConfiguration saved in /usr/local/bin/source/output/config.json\r\nModel weights saved in /usr/local/bin/source/output/pytorch_model.bin\r\nFeature extractor saved in /usr/local/bin/source/output/preprocessor_config.json\r\ntokenizer config file saved in /usr/local/bin/source/output/tokenizer_config.json\r\nSpecial tokens file saved in /usr/local/bin/source/output/special_tokens_map.json\r\nadded tokens file saved in /usr/local/bin/source/output/added_tokens.json\r\ntrainer save model!\r\nmetric: train_runtime\r\n***** train metrics *****\r\n epoch = 43.01\r\n train_loss = 0.0\r\n train_runtime = 0:31:09.28\r\n train_samples_per_second = 1369.504\r\n train_steps_per_second = 85.594\r\n06/19/2023 08:28:34 - INFO - __main__ - *** Evaluate ***\r\n***** Running Evaluation *****\r\n Num examples: Unknown\r\n Batch size = 8\r\nThe following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: locale, audio, input_length, sentence. If locale, audio, input_length, sentence are not expected by `Wav2Vec2ForCTC.forward`, you can safely ignore this message.\r\nDropping the following result as it does not have all the necessary fields:\r\n{'task': {'name': 'Automatic Speech Recognition', 'type': 'automatic-speech-recognition'}, 'metrics': [{'name': 'Wer', 'type': 'wer', 'value': 1.0006485084306096}]}\r\n```\r\n\r\nmax_steps= 160000\r\nlast checkpoint=77000\r\n\r\nif **ignore_data_skip**=True is set, it can resume training correctly." ]
1,606
1,687
1,606
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.0.0 - Platform: Linux-3.10.0-514.el7.x86_64-x86_64-with-centos-7.8.2003-Core - Python version: 3.7.2 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @sgugger ## Information Model I am using (Bert, XLNet ...): BERT The problem arises when using: * [x] the official example scripts: /examples/language-modeling/run_mlm.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: BERT MLM pre-training with own dataset ## To reproduce Steps to reproduce the behavior: 1. Run script run_mlm.py, training from scratch, and save a checkpoint. 2. Stop the training. 3. Restore the training from the checkpoint, e.g. with the code below 4. When restoring, the pre-training process is not progressing (since hours). ```cmd python run_mlm.py --model_type bert --model_name_or_path /bert-base-v2/checkpoint-204516/ --overwrite_output_dir --config_name /bert-base-v2-config/ --tokenizer_name /bert-base-v2-config/ --train_file /train_subset.txt --validation_file /eval_subset.txt --do_train --do_eval --line_by_line --output_dir /bert-base-v2/ --cache_dir /tmp/ --save_total_limit 300 --num_train_epochs 10 --warmup_steps 10000 --logging_steps 5000 --save_steps 11362 --per_device_train_batch_size 128 --per_device_eval_batch_size 128 --seed 42 ``` Output is ``` 12/01/2020 15:43:28 - INFO - __main__ - Loading tokenized dataset from file... 12/01/2020 15:47:22 - INFO - __main__ - Done. [INFO|trainer.py:357] 2020-12-01 15:47:29,458 >> The following columns in the training set don't have a corresponding argument in `BertForMaskedLM.forward` and have been ignored: special_tokens_mask. [INFO|trainer.py:357] 2020-12-01 15:47:29,459 >> The following columns in the evaluation set don't have a corresponding argument in `BertForMaskedLM.forward` and have been ignored: special_tokens_mask. [INFO|trainer.py:662] 2020-12-01 15:47:32,843 >> ***** Running training ***** [INFO|trainer.py:663] 2020-12-01 15:47:32,843 >> Num examples = 145434960 [INFO|trainer.py:664] 2020-12-01 15:47:32,843 >> Num Epochs = 10 [INFO|trainer.py:665] 2020-12-01 15:47:32,843 >> Instantaneous batch size per device = 128 [INFO|trainer.py:666] 2020-12-01 15:47:32,843 >> Total train batch size (w. parallel, distributed & accumulation) = 128 [INFO|trainer.py:667] 2020-12-01 15:47:32,843 >> Gradient Accumulation steps = 1 [INFO|trainer.py:668] 2020-12-01 15:47:32,843 >> Total optimization steps = 11362110 [INFO|trainer.py:681] 2020-12-01 15:47:32,846 >> Continuing training from checkpoint, will skip to saved global_step [INFO|trainer.py:682] 2020-12-01 15:47:32,846 >> Continuing training from epoch 0 [INFO|trainer.py:683] 2020-12-01 15:47:32,846 >> Continuing training from global step 204516 [INFO|trainer.py:684] 2020-12-01 15:47:32,846 >> Will skip the first 204516 batches in the first epoch 0%| | 0/11362110 [00:00<?, ?it/s] ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Would expect the training to restore from 204516 and continue training.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8876/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8875
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8875/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8875/comments
https://api.github.com/repos/huggingface/transformers/issues/8875/events
https://github.com/huggingface/transformers/pull/8875
754,505,834
MDExOlB1bGxSZXF1ZXN0NTMwMzk0NjM5
8,875
Fix mlflow parameter overflow
{ "login": "noise-field", "id": 14188757, "node_id": "MDQ6VXNlcjE0MTg4NzU3", "avatar_url": "https://avatars.githubusercontent.com/u/14188757?v=4", "gravatar_id": "", "url": "https://api.github.com/users/noise-field", "html_url": "https://github.com/noise-field", "followers_url": "https://api.github.com/users/noise-field/followers", "following_url": "https://api.github.com/users/noise-field/following{/other_user}", "gists_url": "https://api.github.com/users/noise-field/gists{/gist_id}", "starred_url": "https://api.github.com/users/noise-field/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/noise-field/subscriptions", "organizations_url": "https://api.github.com/users/noise-field/orgs", "repos_url": "https://api.github.com/users/noise-field/repos", "events_url": "https://api.github.com/users/noise-field/events{/privacy}", "received_events_url": "https://api.github.com/users/noise-field/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks!", "Sorry again we let this sit for so long!\r\n\r\nSo since it's been a long time, the diff has gotten quite messy. Would you mind closing and re-opening a clean PR @noise-field ? Ping me on it and we'll expedite the review. Sorry again.", "Closing as requested" ]
1,606
1,612
1,612
CONTRIBUTOR
null
# What does this PR do? This PR fixes the issue #8849 where MLflow logging failed due to parameters logged being too long. Now the MLflow logger also fetches the limits directly from MLflow validation utility. <!-- Remove if not applicable --> Fixes #8849 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8875/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8875", "html_url": "https://github.com/huggingface/transformers/pull/8875", "diff_url": "https://github.com/huggingface/transformers/pull/8875.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8875.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8874
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8874/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8874/comments
https://api.github.com/repos/huggingface/transformers/issues/8874/events
https://github.com/huggingface/transformers/issues/8874
754,489,078
MDU6SXNzdWU3NTQ0ODkwNzg=
8,874
Results are different when fine-tuning continues after loading model from checkpoint
{ "login": "schwabmi", "id": 52445177, "node_id": "MDQ6VXNlcjUyNDQ1MTc3", "avatar_url": "https://avatars.githubusercontent.com/u/52445177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/schwabmi", "html_url": "https://github.com/schwabmi", "followers_url": "https://api.github.com/users/schwabmi/followers", "following_url": "https://api.github.com/users/schwabmi/following{/other_user}", "gists_url": "https://api.github.com/users/schwabmi/gists{/gist_id}", "starred_url": "https://api.github.com/users/schwabmi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/schwabmi/subscriptions", "organizations_url": "https://api.github.com/users/schwabmi/orgs", "repos_url": "https://api.github.com/users/schwabmi/repos", "events_url": "https://api.github.com/users/schwabmi/events{/privacy}", "received_events_url": "https://api.github.com/users/schwabmi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi there. The results are slightly different because your dataloader has some randomization (the train dataloader has `shuffle=True` ) and the `Trainer` currently does not go through your dataloader for the past epochs when resuming training. So it trains starting from the global step 282 with the data of the epoch 0 of the initial training (hope that makes sense).\r\n\r\nLet me see if we can support full reproducibility without a big drop in performance (cause we don't want to loop through that epoch 0 without doing anything either).", "Hej\r\nthanks for the fast answer!\r\nI tried it out, but the results still differ slightly (leaving out: `--ignore_data_skip`).\r\nYour change in the code should make the results exactly the same when continue training, right?\r\nIs it because of the training data sampler (`RandomSampler`)?\r\n\r\nCheers\r\n", "> Your change in the code should make the results exactly the same when continue training, right?\r\n\r\nYes, and this is enforced by tests in the CI. If your results still differ slightly, there might be another source of randomness not properly seeded that is responsible for those changes." ]
1,606
1,606
1,606
NONE
null
## Environment info - `transformers` version: 4.0.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: yes (device: cuda:0, n_gpu: 1) - Using distributed or parallel set-up in script?: False ### Who can help @sgugger @stefan-it ## Information Model I am using (Bert, XLNet ...): bert-base-cased The problem arises when using: * [x] the official example scripts: run_ner_old.py The tasks I am working on is: * [x] my own task or dataset: token classification for a rhetoric device ## To reproduce Steps to reproduce the behavior: 1. Run run_ner_old script and save model after one epoch (282 steps): ``` python3 ./run_ner_old.py \ --data_dir ./data/ \ --labels ./data/labels.txt \ --model_name_or_path bert-base-cased \ --output_dir ./output/ \ --max_seq_length 128 \ --num_train_epochs 2 \ --per_device_train_batch_size 16 \ --save_steps 282 \ --seed 1 \ --do_train \ --do_eval ``` 2. Run ner_old_script from checkpoint-282: ``` python3 ./run_ner_old.py \ --data_dir ./data/ \ --labels ./data/labels.txt \ --model_name_or_path ./output/checkpoint-282 \ --tokenizer bert-base-cased \ --output_dir ./output2/ \ --max_seq_length 128 \ --num_train_epochs 2 \ --per_device_train_batch_size 16 \ --save_steps 282 \ --seed 1 \ --do_train \ --do_eval ``` 3. Compare evaluation results **First experiment:** Run the script `run_ner_old.py` as showed above to fine-tune BERT. I saved the model after the first epoch (282 steps). **Second experiment:** Run the script `run_ner_old.py` as showed above to fine-tune BERT, starting from checkpoint-282 from the first experiment: ``` [INFO|trainer.py:662] 2020-12-01 14:35:09,848 >> ***** Running training ***** [INFO|trainer.py:663] 2020-12-01 14:35:09,848 >> Num examples = 4501 [INFO|trainer.py:664] 2020-12-01 14:35:09,848 >> Num Epochs = 2 [INFO|trainer.py:665] 2020-12-01 14:35:09,849 >> Instantaneous batch size per device = 16 [INFO|trainer.py:666] 2020-12-01 14:35:09,849 >> Total train batch size (w. parallel, distributed & accumulation) = 16 [INFO|trainer.py:667] 2020-12-01 14:35:09,849 >> Gradient Accumulation steps = 1 [INFO|trainer.py:668] 2020-12-01 14:35:09,849 >> Total optimization steps = 564 [INFO|trainer.py:681] 2020-12-01 14:35:09,851 >> Continuing training from checkpoint, will skip to saved global_step [INFO|trainer.py:682] 2020-12-01 14:35:09,851 >> Continuing training from epoch 1 [INFO|trainer.py:683] 2020-12-01 14:35:09,851 >> Continuing training from global step 282 [INFO|trainer.py:684] 2020-12-01 14:35:09,851 >> Will skip the first 0 batches in the first epoch ``` This seems right as the training continues from step 282 and it trains one complete epoch ("skip the first 0 batches"). But when I **compare the results**, they are slightly different: 1. experiment: eval_f1 = 0.9226747985188413 2. experiment: eval_f1 = 0.9211328976034858 Also the loss after 500 steps is already different: 1. experiment: `{'loss': 0.09096851348876953, 'learning_rate': 5.673758865248227e-06, 'epoch': 1.773049645390071} ` 2. experiment: ` {'loss': 0.010856814384460449, 'learning_rate': 5.673758865248227e-06, 'epoch': 1.773049645390071} ` ## Expected behavior I would have expected that both trained models should produce the same results since the second experiment does exactly the same but in two steps. (The model is saved and loaded between the two epochs). 
The *checkpoint-282* directory consists of the following files: ``` config.json optimizer.pt pytorch_model.bin scheduler.pt trainer_state.json training_args.bin vocab.txt ``` It does not seem that there is any random initialization since I added the seed and the results do not change when running again. Did I forget to save or load anything? Cheers
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8874/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8873
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8873/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8873/comments
https://api.github.com/repos/huggingface/transformers/issues/8873/events
https://github.com/huggingface/transformers/issues/8873
754,441,338
MDU6SXNzdWU3NTQ0NDEzMzg=
8,873
How to pass the attention mask as a param to model forward when using torchscript?
{ "login": "JiayiFu", "id": 8230560, "node_id": "MDQ6VXNlcjgyMzA1NjA=", "avatar_url": "https://avatars.githubusercontent.com/u/8230560?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JiayiFu", "html_url": "https://github.com/JiayiFu", "followers_url": "https://api.github.com/users/JiayiFu/followers", "following_url": "https://api.github.com/users/JiayiFu/following{/other_user}", "gists_url": "https://api.github.com/users/JiayiFu/gists{/gist_id}", "starred_url": "https://api.github.com/users/JiayiFu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JiayiFu/subscriptions", "organizations_url": "https://api.github.com/users/JiayiFu/orgs", "repos_url": "https://api.github.com/users/JiayiFu/repos", "events_url": "https://api.github.com/users/JiayiFu/events{/privacy}", "received_events_url": "https://api.github.com/users/JiayiFu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think you would need to compile your model with both the tokens tensor and the attention mask. Given that the attention mask is the second argument, you can pass it directly when tracing the model:\r\n\r\n```py\r\ntraced_model = torch.jit.trace(model, [tokens_tensor, attention_mask])\r\n```\r\n\r\nthen you can do:\r\n```py\r\nmodel(tokens_tensor, attention_mask)\r\n```", "@LysandreJik It works. Thanks for your help! " ]
1,606
1,606
1,606
NONE
null
## Environment info - `transformers` version: - Platform: Ubuntu 16.04 - Python version: 3.6.9 - PyTorch version (GPU): 1.3.0+cu100 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ## Information I am using a Bert model downloaded from hugging face. I finetuned that model for a two-class classification task and convert that model to a `torchscript` through the `jit.trace`. The following code show how I got the torchscript: from transformers import BertTokenizer, BertForSequenceClassification tokenizer = BertTokenizer.from_pretrained(tokenizer_dir) model = BertForSequenceClassification.from_pretrained(model_dir, num_labels=2, torchscript=True) model.eval() model = model.to("cuda:0") input_text = ["test this case", "test test this case"] encoding = tokenizer(text_batch, return_tensors='pt', padding=True, truncation=False) input_ids = encoding['input_ids'] attention_mask = encoding['attention_mask'] tokens_tensor = torch.tensor(input_ids) tokens_tensor = tokens_tensor.to("cuda:0") traced_model = torch.jit.trace(model, tokens_tensor) torch.jit.save(traced_model, str(pt_path)) The following code show how I use the torchscript, the input tensor is the same with the first code: pt_model = torch.jit.load(model_path)) pt_model.eval() pt_label = pt_model(input_tensor)[0] For the normal model, it needs to pass two params input tensor and attention mask like: model(input, attention_mask=attn_mask) But for the `torchscript`, I can't pass the attention mask to the model. So, what's the right way to use `torchscript` to do the forward with the attention mask? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8873/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8872
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8872/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8872/comments
https://api.github.com/repos/huggingface/transformers/issues/8872/events
https://github.com/huggingface/transformers/issues/8872
754,397,934
MDU6SXNzdWU3NTQzOTc5MzQ=
8,872
Deberta Tokenizatiion
{ "login": "yaysummeriscoming", "id": 11413145, "node_id": "MDQ6VXNlcjExNDEzMTQ1", "avatar_url": "https://avatars.githubusercontent.com/u/11413145?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yaysummeriscoming", "html_url": "https://github.com/yaysummeriscoming", "followers_url": "https://api.github.com/users/yaysummeriscoming/followers", "following_url": "https://api.github.com/users/yaysummeriscoming/following{/other_user}", "gists_url": "https://api.github.com/users/yaysummeriscoming/gists{/gist_id}", "starred_url": "https://api.github.com/users/yaysummeriscoming/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yaysummeriscoming/subscriptions", "organizations_url": "https://api.github.com/users/yaysummeriscoming/orgs", "repos_url": "https://api.github.com/users/yaysummeriscoming/repos", "events_url": "https://api.github.com/users/yaysummeriscoming/events{/privacy}", "received_events_url": "https://api.github.com/users/yaysummeriscoming/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "@LysandreJik any update on this?", "@yaysummeriscoming To get sub words instead of numbers, you can call `tokenizer.gpt2_tokenizer.decode(tokens)`. Please take a look at [our code](https://github.com/huggingface/transformers/blob/52c9e842854a701a7d1b608600a614278b4407d3/src/transformers/tokenization_deberta.py#L396) for reference.", "That did the trick, thanks!" ]
1,606
1,608
1,608
NONE
null
## Environment info - `transformers` version: 4.0.0 - Platform: Linux - Python version: 3.8 - PyTorch version (GPU?): 1.7 ### Who can help @BigBird01 @LysandreJik ## Information I'd like to use the new deberta model, but it seems that the tokens aren't mapped correctly? ``` from transformers import AutoTokenizer test_string = 'hello, I am a dog' tokenizer = AutoTokenizer.from_pretrained('roberta-base') print('Roberta output is: ', tokenizer.tokenize(test_string)) tokenizer = AutoTokenizer.from_pretrained('microsoft/deberta-base') print('Deberta output is: ', tokenizer.tokenize(test_string)) ``` Roberta output is: ['hello', ',', 'ĠI', 'Ġam', 'Ġa', 'Ġdog'] Deberta output is: ['31373', '11', '314', '716', '257', '3290'] I'd expect deberta to give an output similar to roberta, rather than numbers.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8872/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8871
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8871/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8871/comments
https://api.github.com/repos/huggingface/transformers/issues/8871/events
https://github.com/huggingface/transformers/issues/8871
754,363,857
MDU6SXNzdWU3NTQzNjM4NTc=
8,871
Decrease Longformer window size / computational cost
{ "login": "iliaschalkidis", "id": 1626984, "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliaschalkidis", "html_url": "https://github.com/iliaschalkidis", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @iliaschalkidis,\r\n\r\nthanks for your issue! The memory usage in Longformer does not decrease linearly when reducing the attention_window...but I'm a bit surprised that you are experiencing OOM in your set-up...Does the same happen for your in eager mode? I'll try to look into it a bit next week. One thing that would be of great help is if you find time to benchmark the memory usage of `TFLongformer`, for:\r\n\r\n- eager mode\r\n- compiled\r\n\r\nfor different settings of the window size", "Hi @patrickvonplaten, how do you recommend to perform this benchmarking? Any suggestion (best practice/ tool)? I also had the impression that TF2 is in eager execution by default...", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
NONE
null
Hi there, I would like to use Longformer instead of BERT or ROBERTA for longer documents, e.g., 1024 subword units. My goal is to fit a batch of equal size in the same GPU card for all models. In my understanding, this cannot happen with the default configuration, which used windows of 512 subword-units for local attention. In other words, this is by default more computationally expensive than running BERT or ROBERTA. So I thought the solution would be to decrease the size of window proportionally to the increase of input sequence size. This will lead to equal or less computations. I run the following experiments in a single RTX 2080Ti: **Train ROBERTA with `batch_size=6` and `max_len=512` (SUCCESS)** ```python from transformers import TFLongformerModel, LongformerConfig, TFRobertaModel import tensorflow as tf import numpy as np import logging logging.getLogger("tensorflow").setLevel(logging.ERROR) logging.getLogger("transformers").setLevel(logging.ERROR) class Classifier(tf.keras.Model): def __init__(self, bert_encoder, *args, **kwargs): super(Classifier, self).__init__(*args, **kwargs) self.classifier = tf.keras.layers.Dense(2) self.bert_encoder = bert_encoder def call(self, inputs): bert_encodings = self.bert_encoder(inputs) return self.classifier(tf.squeeze(bert_encodings[0][:, 0:1, :], axis=1)) # Train ROBERTA for 512 TOKENS roberta = TFRobertaModel.from_pretrained('roberta-base') roberta_classifier = Classifier(bert_encoder=roberta) dummy_inputs = np.zeros((6, 512), dtype=np.int32) dummy_outputs = np.zeros((6, 2), dtype=np.int32) roberta_classifier.compile(optimizer='adam', loss='categorical_crossentropy') roberta_classifier.fit(dummy_inputs, dummy_outputs, batch_size=8) print('Roberta (512) trained successfully!') ``` **Train LONGFORMER with `batch_size=6` and `max_len=512` and `attention_window=512` (SUCCESS)** ```python # Train LONG-FORMER for 512 TOKENS config = LongformerConfig.from_pretrained('allenai/longformer-base-4096') config.attention_window = [512] * 12 longformer = TFLongformerModel(config) longformer_classifier = Classifier(bert_encoder=longformer) dummy_inputs = np.zeros((6, 512), dtype=np.int32) dummy_outputs = np.zeros((6, 2), dtype=np.int32) longformer_classifier.compile(optimizer='adam', loss='categorical_crossentropy') longformer_classifier.fit(dummy_inputs, dummy_outputs, batch_size=8) print('Longformer (512) trained successfully!') ``` **Train LONGFORMER with `batch_size=6` and `max_len=1024` and `attention_window=128` (FAILED-OOM)** ```python # Train LONG-FORMER for 1024 TOKENS config = LongformerConfig.from_pretrained('allenai/longformer-base-4096') config.attention_window = [128] * 12 longformer = TFLongformerModel(config) longformer_classifier = Classifier(bert_encoder=longformer) dummy_inputs = np.zeros((6, 1024), dtype=np.int32) dummy_outputs = np.zeros((6, 2), dtype=np.int32) longformer_classifier.compile(optimizer='adam', loss='categorical_crossentropy') longformer_classifier.fit(dummy_inputs, dummy_outputs, batch_size=8) print('Longformer (1024) trained successfully!') ``` The last script is failing with an OOM issue. The same happens for `attention_window in [32,64]` @patrickvonplaten do I miss something? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8871/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8870
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8870/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8870/comments
https://api.github.com/repos/huggingface/transformers/issues/8870/events
https://github.com/huggingface/transformers/issues/8870
754,361,611
MDU6SXNzdWU3NTQzNjE2MTE=
8,870
Token classification example only returns labels as -100 for longformer
{ "login": "JakeCowton", "id": 4622202, "node_id": "MDQ6VXNlcjQ2MjIyMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/4622202?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JakeCowton", "html_url": "https://github.com/JakeCowton", "followers_url": "https://api.github.com/users/JakeCowton/followers", "following_url": "https://api.github.com/users/JakeCowton/following{/other_user}", "gists_url": "https://api.github.com/users/JakeCowton/gists{/gist_id}", "starred_url": "https://api.github.com/users/JakeCowton/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JakeCowton/subscriptions", "organizations_url": "https://api.github.com/users/JakeCowton/orgs", "repos_url": "https://api.github.com/users/JakeCowton/repos", "events_url": "https://api.github.com/users/JakeCowton/events{/privacy}", "received_events_url": "https://api.github.com/users/JakeCowton/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.0.0 - Platform: Linux-5.4.0-1029-aws-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?:4 x V100 (16GB) - Using distributed or parallel set-up in script?: parallel Also: tokenizers==0.9.4 datasets==1.1.3 I should note I had the same issue in transformers 3.5.1 too ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> tokenizers: @mfuntowicz Longformer/Reformer: @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): Longformer The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) I have made very modifications to the token-classification example ([see here](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py)) to allow me to use my own custom dataset for NER. I have 3 labels O, B-Org, I-ORG. When processing my inputs with models like `bert-base-cased`, everything runs smoothly, however, when I make the switch to the `allenai/longformer-base-4096` model, the `tokenize_and_align_labels()` that runs via `datasets.map()` only returns labels of -100 for every token. ## To reproduce Patch file for original `run_ner.py` ``` 20d19 < 44d42 < 199d196 < 206,209c203,204 < text_column_name = "tokens" if "tokens" in column_names else column_names[0] < label_column_name = ( < f"{data_args.task_name}_tags" if f"{data_args.task_name}_tags" in column_names else column_names[1] < ) --- > text_column_name = "words" > label_column_name = "ner" 213,217c208,213 < def get_label_list(labels): < unique_labels = set() < for label in labels: < unique_labels = unique_labels | set(label) < label_list = list(unique_labels) --- > def get_label_list(label_lists): > label_list = list(set( > [label > for label_list in label_lists > for label in label_list] > )) 227a224 > id_to_label = {i: l for i, l in enumerate(label_list)} 239a237,238 > id2label=id_to_label, > label2id=label_to_id, 244a244 > add_prefix_space=True, ``` minimal `train.json` data ```json { "id": 169, "words": [ "My", "favourite", "thing", "about", "the", "market", "was", "Manteigaria", "which", "sells", "the", "best", "pasteis", "de", "Nata", "in", "the", "city", "in", "my", "opinion", "." ], "ner": [ "O", "O", "O", "O", "O", "O", "O", "B-Organisation", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O" ] } ``` Example `train-config.json` ```json { "train_file": "./train.json", "model_name_or_path": "allenai/longformer-base-4096", "output_dir": "./output", "max_seq_length": 4096, "num_train_epochs": 3, "pad_to_max_length": false, "per_device_train_batch_size": 1, "per_device_eval_batch_size": 1, "save_steps": 250, "eval_steps": 250, "seed": 1, "do_train": true, "do_eval": false, "do_predict": false, "fp16": true, "evaluation_strategy": "steps", "save_total_limit": 1, } ``` Steps to reproduce the behavior: 1. Apply the patch to the `run_ner.py` file found in the token_classification example on the master branch 2. run `python run_ner.py train-config.json` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> If you print `tokenized_inputs["labels"]` produced by `tokenize_and_align_labels()` you'll see that it is a list of `-100` when using `allenai/longformer-base-4096`. However, if you change this to `bert-base-cased`, it will produce the labels `[[-100, 1, 1, 1, 1, 1, 1, 1, 0, -100, -100, -100, 1, 1, 1, 1, 1, -100, 1, 1, -100, 1, 1, 1, 1, 1, 1, 1, -100]]`, which is correct as `1` is `O` and `0` is `B-Organisation`. (There won't be an `I-Organisation` as this minimal reproducible example doesn't have one).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8870/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8869
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8869/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8869/comments
https://api.github.com/repos/huggingface/transformers/issues/8869/events
https://github.com/huggingface/transformers/issues/8869
754,322,396
MDU6SXNzdWU3NTQzMjIzOTY=
8,869
Exporting ALBERT model to onnx increases model size by 7x
{ "login": "unography", "id": 5240449, "node_id": "MDQ6VXNlcjUyNDA0NDk=", "avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unography", "html_url": "https://github.com/unography", "followers_url": "https://api.github.com/users/unography/followers", "following_url": "https://api.github.com/users/unography/following{/other_user}", "gists_url": "https://api.github.com/users/unography/gists{/gist_id}", "starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unography/subscriptions", "organizations_url": "https://api.github.com/users/unography/orgs", "repos_url": "https://api.github.com/users/unography/repos", "events_url": "https://api.github.com/users/unography/events{/privacy}", "received_events_url": "https://api.github.com/users/unography/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@mfuntowicz might have an idea. I'm guessing that what it does is copy each layer, while all the layers have shared weights and should point to the same tensor.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,606
1,619
1,619
CONTRIBUTOR
null
I'm trying to export `albert-base-v2` model to onnx using `python -m transformers.convert_graph_to_onnx --framework pt --model albert-base-v2 --quantize albert.onnx --opset 12` The original pytorch model size is around 45 MB (https://huggingface.co/albert-base-v2), but the exported model size is around 340 MB. How do I keep the model's size same?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8869/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8868
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8868/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8868/comments
https://api.github.com/repos/huggingface/transformers/issues/8868/events
https://github.com/huggingface/transformers/pull/8868
754,254,006
MDExOlB1bGxSZXF1ZXN0NTMwMTg1NDcy
8,868
Transfoxl seq classification
{ "login": "spatil6", "id": 6419011, "node_id": "MDQ6VXNlcjY0MTkwMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spatil6", "html_url": "https://github.com/spatil6", "followers_url": "https://api.github.com/users/spatil6/followers", "following_url": "https://api.github.com/users/spatil6/following{/other_user}", "gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}", "starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spatil6/subscriptions", "organizations_url": "https://api.github.com/users/spatil6/orgs", "repos_url": "https://api.github.com/users/spatil6/repos", "events_url": "https://api.github.com/users/spatil6/events{/privacy}", "received_events_url": "https://api.github.com/users/spatil6/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
This PR implements Sequence classification for Transformer XL model TransfoxlForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-1,GPT-2) do. Fixes #7623 (Partially) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8868/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8868", "html_url": "https://github.com/huggingface/transformers/pull/8868", "diff_url": "https://github.com/huggingface/transformers/pull/8868.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8868.patch", "merged_at": 1606921713000 }
https://api.github.com/repos/huggingface/transformers/issues/8867
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8867/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8867/comments
https://api.github.com/repos/huggingface/transformers/issues/8867/events
https://github.com/huggingface/transformers/issues/8867
754,246,229
MDU6SXNzdWU3NTQyNDYyMjk=
8,867
length_penalty not influencing results (Bart, Pegasus)
{ "login": "marcoabrate", "id": 43387597, "node_id": "MDQ6VXNlcjQzMzg3NTk3", "avatar_url": "https://avatars.githubusercontent.com/u/43387597?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcoabrate", "html_url": "https://github.com/marcoabrate", "followers_url": "https://api.github.com/users/marcoabrate/followers", "following_url": "https://api.github.com/users/marcoabrate/following{/other_user}", "gists_url": "https://api.github.com/users/marcoabrate/gists{/gist_id}", "starred_url": "https://api.github.com/users/marcoabrate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcoabrate/subscriptions", "organizations_url": "https://api.github.com/users/marcoabrate/orgs", "repos_url": "https://api.github.com/users/marcoabrate/repos", "events_url": "https://api.github.com/users/marcoabrate/events{/privacy}", "received_events_url": "https://api.github.com/users/marcoabrate/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!" ]
1,606
1,606
1,606
NONE
null
Hello, I am experimenting with the generative parameters of the two models Bart and Pegasus. In particular, I am having trouble with the `length_penalty` parameter, since changing it does not change the output of the model. I am summarizing two different chapters of a book (# tokens around 1k) and this is the code I am using: ``` model.generate( b0ch1sec1_text_enc, min_length = 150, max_length = 350, num_beams = 2, length_penalty = lp, early_stopping = True)[0] ``` With `lp` swiping from 0.1 to 2 and model being either `bart-large-cnn` or `pegasus-large`. Do you have any idea why the output does not change at all?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8867/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8866
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8866/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8866/comments
https://api.github.com/repos/huggingface/transformers/issues/8866/events
https://github.com/huggingface/transformers/issues/8866
754,230,561
MDU6SXNzdWU3NTQyMzA1NjE=
8,866
different embedding weights for base-uncased with different transformers versions
{ "login": "aleksandra-sp", "id": 69455636, "node_id": "MDQ6VXNlcjY5NDU1NjM2", "avatar_url": "https://avatars.githubusercontent.com/u/69455636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aleksandra-sp", "html_url": "https://github.com/aleksandra-sp", "followers_url": "https://api.github.com/users/aleksandra-sp/followers", "following_url": "https://api.github.com/users/aleksandra-sp/following{/other_user}", "gists_url": "https://api.github.com/users/aleksandra-sp/gists{/gist_id}", "starred_url": "https://api.github.com/users/aleksandra-sp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aleksandra-sp/subscriptions", "organizations_url": "https://api.github.com/users/aleksandra-sp/orgs", "repos_url": "https://api.github.com/users/aleksandra-sp/repos", "events_url": "https://api.github.com/users/aleksandra-sp/events{/privacy}", "received_events_url": "https://api.github.com/users/aleksandra-sp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Facing the same issue. A reply on this is highly appreciated.", "can [this](https://github.com/huggingface/transformers/issues/8524#issuecomment-753876838) be your solution? Hope it helps...", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "I had the same issue for `GPT2LMHeadModel`. In my case, I found the solution. Let `hf3_model_dir` be the Hugging Face v3.x model directory that you give to `load_pretrained`. Inside this directory is the saved pytorch model file called `pytorch_model.bin`. Let's load this file directly using pytorch:\r\n\r\n```\r\nstate_dict = torch.load('pytorch_model.bin')`\r\n```\r\n\r\nNow check the values of these two entries:\r\n\r\n```\r\nstate_dict['transformer.wte.weight']\r\nstate_dict['lm_head.weight']\r\n```\r\n\r\nI found that they were different. However, they should be the same `vocab_size x embedding_size` matrix. Indeed, let's actually load the model: \r\n\r\n```\r\nmodel = transformers.GPT2LMHeadModel.from_pretrained(hf3_model)`\r\n```\r\nAnd check the following values:\r\n```\r\nmodel.transformer.wte.weight\r\nmodel.lm_head.weight\r\n```\r\nYou will find that they are the same. However,\r\nin Hugging Face v3.x, they are both equal to `state_dict['lm_head.weight']`\r\nin Hugging Face v4.x, they are both equal to `state_dict['transformer.wte.weight']`.\r\n\r\nSo that's the cause of the problem. To get the same behavior in Hugging Face v4.x as you get in Hugging Face v3.x, I manually set both equal to `state_dict['lm_head.weight']`.", "As a further comment, for models saved under Hugging Face v4.x, `state_dict['transformer.wte.weight']` and `state_dict['lm_head.weight']` are both equal as they should be.\r\n\r\nFor models saved under Hugging Face v3.x, `state_dict['transformer.wte.weight']` ends up being (I believe) just random garbage that is harmless if reloaded using Hugging Face v3.x but can be very harmful if reloaded using Hugging Face v4.x" ]
1,606
1,639
1,619
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.0.0, 3.4.0 and 2.9.0 - Platform: - Python version: 3.7.0 - PyTorch version: 1.4.0 - Tensorflow version: 2.2.0 ## Information Model I am using: Bert The problem arises when using my own scripts. I trained a LayoutLM model by using the original Unilm repo (https://github.com/microsoft/unilm/tree/master/layoutlm) and obtained pretty good results (± 0.9 f1 score). When the Huggingface implementation came out, I retrained the model with the same dataset, parameters and seed and got rubbish results (less then 0.2 F1 score). After investigating, I found that the weights of the embeddings of the pretrained model, loaded at the beginning of training are different for different transformers versions. The weights are also different for the final trained model: a model trained with the original implementation gives different predict results for the same data when predicting using the Huggingface implementation, due to the weights being different after loading. ## To reproduce Steps to reproduce the behavior: Huggingface code: ``` from transformers import LayoutLMConfig, LayoutLMForTokenClassification pretrained_model_path = "models/base-uncased" config = LayoutLMConfig.from_pretrained(pretrained_model_path, num_labels=len(25)) model = LayoutLMForTokenClassification.from_pretrained( pretrained_model_path, from_tf=bool(".ckpt" in pretrained_model_path), config=config ) print(model.base_model._modules["embeddings"]._modules["word_embeddings"].weight) """transformers 4.0.0: Parameter containing: tensor([[-0.0211, -0.0056, 0.0198, ..., 0.0119, 0.0074, -0.0048], [-0.0268, 0.0006, 0.0310, ..., -0.0195, -0.0534, 0.0284], [ 0.0234, 0.0026, -0.0024, ..., -0.0074, -0.0015, -0.0212], ..., [-0.0274, -0.0074, 0.0161, ..., -0.0256, 0.0189, -0.0328], [-0.0350, -0.0304, 0.0087, ..., -0.0349, -0.0086, 0.0229], [-0.0068, -0.0077, -0.0084, ..., -0.0181, -0.0111, 0.0385]], requires_grad=True) """ """transformers 3.4.0: Parameter containing: tensor([[ 0.0298, -0.0229, -0.0033, ..., 0.0097, -0.0179, -0.0065], [-0.0098, 0.0150, -0.0283, ..., -0.0424, -0.0031, -0.0135], [ 0.0122, 0.0038, -0.0066, ..., -0.0261, 0.0167, 0.0176], ..., [ 0.0037, 0.0001, 0.0096, ..., -0.0037, -0.0018, 0.0067], [ 0.0274, 0.0076, 0.0065, ..., 0.0084, -0.0230, -0.0011], [-0.0155, -0.0155, -0.0028, ..., -0.0140, 0.0084, -0.0016]], requires_grad=True) """ ``` With original Layoutlm implementation, transformers 2.9.0: ``` from unilm.layoutlm.layoutlm import LayoutlmConfig, LayoutlmForTokenClassification pretrained_model_path = "models/base-uncased" config = LayoutlmConfig.from_pretrained( pretrained_model_path, num_labels=len(25), ) model = LayoutlmForTokenClassification.from_pretrained( pretrained_model_path, from_tf=bool(".ckpt" in pretrained_model_path), config=config, ) print(model.base_model._modules["embeddings"]._modules["word_embeddings"].weight) """ Parameter containing: tensor([[-0.0111, -0.0777, 0.0293, ..., -0.0323, -0.0190, 0.0403], [-0.0579, -0.0331, -0.0399, ..., -0.0248, -0.0278, -0.0398], [-0.0261, -0.0383, -0.0225, ..., 0.0011, -0.0803, -0.0019], ..., [-0.0186, -0.0593, -0.0167, ..., -0.0243, -0.0096, 0.0050], [-0.0555, -0.0274, 0.0049, ..., -0.0206, -0.0172, -0.0241], [-0.0328, -0.0788, -0.0211, ..., -0.0187, -0.0497, 0.0444]], requires_grad=True) """ ``` ## Expected behavior Get the same weights regardless the transformers version used.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8866/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/8866/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8865
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8865/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8865/comments
https://api.github.com/repos/huggingface/transformers/issues/8865/events
https://github.com/huggingface/transformers/issues/8865
754,213,092
MDU6SXNzdWU3NTQyMTMwOTI=
8,865
can the BertModel convert to onnx? whether any one had done sucessfully ?
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "2 resources in the thread linked by valhalla: https://discuss.huggingface.co/t/how-to-apply-pruning-on-a-bert-model/1658/5", "> 2 resources in the thread linked by valhalla: https://discuss.huggingface.co/t/how-to-apply-pruning-on-a-bert-model/1658/5\r\n\r\nThank you very much for your warm reply, I will study it." ]
1,606
1,606
1,606
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8865/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8864
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8864/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8864/comments
https://api.github.com/repos/huggingface/transformers/issues/8864/events
https://github.com/huggingface/transformers/issues/8864
753,939,081
MDU6SXNzdWU3NTM5MzkwODE=
8,864
AttributeError: 'NoneType' object has no attribute 'from_pretrained'
{ "login": "louisabraham", "id": 13174805, "node_id": "MDQ6VXNlcjEzMTc0ODA1", "avatar_url": "https://avatars.githubusercontent.com/u/13174805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/louisabraham", "html_url": "https://github.com/louisabraham", "followers_url": "https://api.github.com/users/louisabraham/followers", "following_url": "https://api.github.com/users/louisabraham/following{/other_user}", "gists_url": "https://api.github.com/users/louisabraham/gists{/gist_id}", "starred_url": "https://api.github.com/users/louisabraham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/louisabraham/subscriptions", "organizations_url": "https://api.github.com/users/louisabraham/orgs", "repos_url": "https://api.github.com/users/louisabraham/repos", "events_url": "https://api.github.com/users/louisabraham/events{/privacy}", "received_events_url": "https://api.github.com/users/louisabraham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Same here a couple of hours ago", "1. Hi, could you please provide the information related to your environment? \r\n\r\n2. When you say it was working yesterday but was working before, do you mean to say you've upgraded to version v4.0.0 released yesterday? If this is so, you may be obtaining the following error message: `AttributeError: 'NoneType' object has no attribute 'from_pretrained'`. This would be because you do not have `sentencepiece` installed.\r\n\r\n3. Are you sure this worked previously? This should never have worked, as `AutoTokenizer` cannot be initialized like this, but has to be instantiated from the `from_pretrained` method:\r\n\r\n```py\r\nfrom transformers import AutoTokenizer\r\nAutoTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-en-fr\")\r\n```\r\n\r\nwhich works on v4.0.0 and on `master`, as long as you have SentencePiece installed.", "Putting a better error message in #8881.", "Right, I was using\r\n```py\r\nAutoTokenizer.from_pretrained(\"Helsinki-NLP/opus-mt-en-fr\")\r\n```\r\n\r\nThanks, `pip install sentencepiece` fixed the issue!\r\n\r\nIt looks that previously the tokenizer outputted torch tensors and now lists. Is this intended? It breaks existing code.", "Yes, this was a bug. Tokenizers are framework-agnostic and should not output a specific framework's tensor. The implementation of the Marian tokenizer was not respecting the API in that regard.\r\n\r\nTokenizers can still handle torch tensors, you need to specify that you want them though:\r\n\r\n```py\r\ntokenizer(xxx, return_tensors=\"pt\")\r\n```\r\n\r\nI guess in your situation it has to do with the `prepare_seq2seq_batch`:\r\n\r\n```py\r\ntokenizer.prepare_seq2seq_batch(xxx, return_tensors=\"pt\")\r\n```", "Thanks!" ]
1,606
1,606
1,606
NONE
null
This code was working yesterday but doesn't work today: ```py from transformers import AutoTokenizer AutoTokenizer("Helsinki-NLP/opus-mt-en-fr") ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8864/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 3, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8864/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8863
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8863/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8863/comments
https://api.github.com/repos/huggingface/transformers/issues/8863/events
https://github.com/huggingface/transformers/issues/8863
753,867,035
MDU6SXNzdWU3NTM4NjcwMzU=
8,863
Unwanted left shift of target tokens in `get_nll`
{ "login": "JamesDeAntonis", "id": 33379057, "node_id": "MDQ6VXNlcjMzMzc5MDU3", "avatar_url": "https://avatars.githubusercontent.com/u/33379057?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JamesDeAntonis", "html_url": "https://github.com/JamesDeAntonis", "followers_url": "https://api.github.com/users/JamesDeAntonis/followers", "following_url": "https://api.github.com/users/JamesDeAntonis/following{/other_user}", "gists_url": "https://api.github.com/users/JamesDeAntonis/gists{/gist_id}", "starred_url": "https://api.github.com/users/JamesDeAntonis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JamesDeAntonis/subscriptions", "organizations_url": "https://api.github.com/users/JamesDeAntonis/orgs", "repos_url": "https://api.github.com/users/JamesDeAntonis/repos", "events_url": "https://api.github.com/users/JamesDeAntonis/events{/privacy}", "received_events_url": "https://api.github.com/users/JamesDeAntonis/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "@patrickvonplaten or @lhoestq might have an idea.", "Hey @JamesDeAntonis yes this is expected. The same behavior would occur for GPT2 if only one token is provided as the labels. You should at least add an EOS token at the end to `labels` (so that you have two labels tokens) to make sure the loss is not zero.\r\n\r\nIf you cannot do this you will have to fork the repo and manually change the `get_nll()` function.", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.4 - Platform: ubuntu - Python version: 3.8 - PyTorch version (GPU?): 1.6 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?:no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. RAG: @patrickvonplaten, @lhoestq --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [x ] my own modified scripts: (give details below) I am training RAG on the FEVER dataset, trying to generate one token from `['[SUPPORTS]', '[REFUTES]', '[INCONCLUSIVE]'. My loss function is always zero, I think because of the shift left that occurs in `RagTokenForGeneration.get_nll()`, which I think should only happen if special tokens are included. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Training RAG on FEVER dataset ## To reproduce Steps to reproduce the behavior: 1. Simply train `RAGTokenForGeneration` on anything, using only one token with no special tokens. Loss is zero <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8863/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8862
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8862/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8862/comments
https://api.github.com/repos/huggingface/transformers/issues/8862/events
https://github.com/huggingface/transformers/issues/8862
753,854,438
MDU6SXNzdWU3NTM4NTQ0Mzg=
8,862
TypeError: forward() got an unexpected keyword argument 'past'
{ "login": "BigSalmon2", "id": 61605789, "node_id": "MDQ6VXNlcjYxNjA1Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/61605789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BigSalmon2", "html_url": "https://github.com/BigSalmon2", "followers_url": "https://api.github.com/users/BigSalmon2/followers", "following_url": "https://api.github.com/users/BigSalmon2/following{/other_user}", "gists_url": "https://api.github.com/users/BigSalmon2/gists{/gist_id}", "starred_url": "https://api.github.com/users/BigSalmon2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BigSalmon2/subscriptions", "organizations_url": "https://api.github.com/users/BigSalmon2/orgs", "repos_url": "https://api.github.com/users/BigSalmon2/repos", "events_url": "https://api.github.com/users/BigSalmon2/events{/privacy}", "received_events_url": "https://api.github.com/users/BigSalmon2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, it seems you've upgraded your library from a 3.x version to 4.0.0. I invite you to consult the migration guide [here (deprecated attributes)](https://huggingface.co/transformers/migration.html#removed-some-deprecated-attributes) or to pin your `transformers` on version 3: `transformers==3`.", "Thank you!" ]
1,606
1,606
1,606
NONE
null
TypeError: forward() got an unexpected keyword argument 'past' ``` text1 = request.form['rawtext'] m = text1 text = tokenizer.encode(text1) myinput, past = torch.tensor([text]), None logits, past = model(myinput, past = past) logits = logits[0,-1] probabilities = torch.nn.functional.softmax(logits) best_logits, best_indices = logits.topk(780) best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] text.append(best_indices[0].item()) best_probabilities = probabilities[best_indices].tolist() for i in range(780): f = ('Generated {}: {}'.format(i, best_words[i])) print(f) ``` ``` /content/GLPAPP * Serving Flask app "__main__" (lazy loading) * Environment: production WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Debug mode: off * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) * Running on http://19bd405035a7.ngrok.io * Traffic stats available on http://127.0.0.1:4040 127.0.0.1 - - [30/Nov/2020 22:34:16] "GET / HTTP/1.1" 200 - 127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/js/jquery.min.js HTTP/1.1" 200 - 127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/css/main.css HTTP/1.1" 200 - 127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/js/breakpoints.min.js HTTP/1.1" 200 - 127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/js/main.js HTTP/1.1" 200 - 127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/js/util.js HTTP/1.1" 200 - 127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/js/browser.min.js HTTP/1.1" 200 - 127.0.0.1 - - [30/Nov/2020 22:34:19] "GET /static/css/fontawesome-all.min.css HTTP/1.1" 200 - 127.0.0.1 - - [30/Nov/2020 22:34:22] "GET /favicon.ico HTTP/1.1" 404 - [2020-11-30 22:34:23,393] ERROR in app: Exception on /predict [POST] Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "/usr/local/lib/python3.6/dist-packages/flask/_compat.py", line 39, in reraise raise value File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "<ipython-input-8-a5d8492f7c0c>", line 36, in predict logits, past = model(myinput, past = past) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'past' 127.0.0.1 - - [30/Nov/2020 22:34:23] "POST /predict HTTP/1.1" 500 - ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8862/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8861
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8861/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8861/comments
https://api.github.com/repos/huggingface/transformers/issues/8861/events
https://github.com/huggingface/transformers/pull/8861
753,852,672
MDExOlB1bGxSZXF1ZXN0NTI5ODUwODMz
8,861
Add warnings for incompatible generation parameters
{ "login": "jsrozner", "id": 1113285, "node_id": "MDQ6VXNlcjExMTMyODU=", "avatar_url": "https://avatars.githubusercontent.com/u/1113285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jsrozner", "html_url": "https://github.com/jsrozner", "followers_url": "https://api.github.com/users/jsrozner/followers", "following_url": "https://api.github.com/users/jsrozner/following{/other_user}", "gists_url": "https://api.github.com/users/jsrozner/gists{/gist_id}", "starred_url": "https://api.github.com/users/jsrozner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jsrozner/subscriptions", "organizations_url": "https://api.github.com/users/jsrozner/orgs", "repos_url": "https://api.github.com/users/jsrozner/repos", "events_url": "https://api.github.com/users/jsrozner/events{/privacy}", "received_events_url": "https://api.github.com/users/jsrozner/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Question: the `generate()` documentation says that `topk` etc params will default to values in the `pretrainedconfig`, but `top_k`, for example is never read from the config file. This differs from, e.g., `num_beams, max_length` which are read from config if they are not passed in as params to the generate function.\r\n\r\nAnd since, for example, `T5PreTrainedModel` never overwrites the generate function, I don't see how the defaults in the config (for params like `top_k`) could actually end up being passed to the generate function?", "> Question: the `generate()` documentation says that `topk` etc params will default to values in the `pretrainedconfig`, but `top_k`, for example is never read from the config file. This differs from, e.g., `num_beams, max_length` which are read from config if they are not passed in as params to the generate function.\r\n> \r\n> And since, for example, `T5PreTrainedModel` never overwrites the generate function, I don't see how the defaults in the config (for params like `top_k`) could actually end up being passed to the generate function?\r\n\r\nRegarding reading from config - am I missing something or do these never get checked? \r\n\r\n@patrickvonplaten ", "> > Question: the `generate()` documentation says that `topk` etc params will default to values in the `pretrainedconfig`, but `top_k`, for example is never read from the config file. This differs from, e.g., `num_beams, max_length` which are read from config if they are not passed in as params to the generate function.\r\n> > And since, for example, `T5PreTrainedModel` never overwrites the generate function, I don't see how the defaults in the config (for params like `top_k`) could actually end up being passed to the generate function?\r\n> \r\n> Regarding reading from config - am I missing something or do these never get checked?\r\n> \r\n> @patrickvonplaten\r\n\r\nhttps://github.com/huggingface/transformers/blob/693ac3594b96e86dd282fdf8e413f3a48b176892/src/transformers/generation_utils.py#L240", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
CONTRIBUTOR
null
# What does this PR do? While testing various generation configurations, I got confused when beam_sample's outputs (i.e. `num_beams > 1, do_sample=True`) failed to change with different settings of `top_k, top_p, temperature`. Then I realized that in my call to `generate()`, `do_sample` was still set to false even though I was cycling through various settings of top_k, top_p and temperature. Generate (and its helper methods) already contain some parameter compatibility checks. This adds a few more checks: - `num_beams` must be >= 1 - when `do_sample` is not set, warn the user if any of `top_p, top_k, temperature` are not None (since they will have no effect) - if `num_beams` is 1 (no beam search), warn user if any of `early_stopping, length_penalty` are not None (since they will have no effect) - if an invalid set of params `num_beams` and `do_sample` are passed, raise ValueError. Note that since we also add a check for `num_beams < 1`, this final value error will never be raised, but this prevents the `generate` function from falling off the end of the ifelse chain if something is altered in the future. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). No doc changes necessary - [ ] Did you write any new necessary tests? ran tests, but no new functionality created, just warning messages ## Who can review? Please tag fewer than 3 people. Text Generation: @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8861/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8861", "html_url": "https://github.com/huggingface/transformers/pull/8861", "diff_url": "https://github.com/huggingface/transformers/pull/8861.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8861.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8860
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8860/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8860/comments
https://api.github.com/repos/huggingface/transformers/issues/8860/events
https://github.com/huggingface/transformers/pull/8860
753,838,073
MDExOlB1bGxSZXF1ZXN0NTI5ODM4ODMy
8,860
Prevent BatchEncoding from blindly passing casts down to the tensors it contains
{ "login": "Craigacp", "id": 729696, "node_id": "MDQ6VXNlcjcyOTY5Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/729696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Craigacp", "html_url": "https://github.com/Craigacp", "followers_url": "https://api.github.com/users/Craigacp/followers", "following_url": "https://api.github.com/users/Craigacp/following{/other_user}", "gists_url": "https://api.github.com/users/Craigacp/gists{/gist_id}", "starred_url": "https://api.github.com/users/Craigacp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Craigacp/subscriptions", "organizations_url": "https://api.github.com/users/Craigacp/orgs", "repos_url": "https://api.github.com/users/Craigacp/repos", "events_url": "https://api.github.com/users/Craigacp/events{/privacy}", "received_events_url": "https://api.github.com/users/Craigacp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "black complained about the style after the update, so I fixed it and squashed the commits again.", "Thank you @Craigacp!" ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? This PR prevents `BatchEncoding.to` from passing down things which aren't devices to the tensors it contains. Previously it would pass down all the arguments, and as the `to` method in pytorch can also cast the arguments to different types it's used blindly by other packages (e.g. Nvidia's Apex). This caused an issue where when using Apex's AMP support with `O2` or greater it would cast the token indexes from a `LongTensor` to a `HalfTensor` truncating our vocab at 65k and rounding most of the words to the nearest 8th word (if you blindly insert the cast back in in the embedding layer, which the warning says to do). The doc for `BatchEncoding.to` says it is only for moving the encoding and the tensors it contains between devices, but as the type checking isn't on by default it can behave like a regular pytorch `to` method and accept cast arguments that it passes down to the tensors it contains. Fixes #6582 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. #6582 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? There are no docs or tests changes as the change makes the method conform with its currently documented behaviour. @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8860/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8860", "html_url": "https://github.com/huggingface/transformers/pull/8860", "diff_url": "https://github.com/huggingface/transformers/pull/8860.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8860.patch", "merged_at": 1606845713000 }
https://api.github.com/repos/huggingface/transformers/issues/8859
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8859/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8859/comments
https://api.github.com/repos/huggingface/transformers/issues/8859/events
https://github.com/huggingface/transformers/issues/8859
753,820,280
MDU6SXNzdWU3NTM4MjAyODA=
8,859
transformers/trainer.py stops after some iterations for iterative dataloaders.
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "could you tell me please how should be the format of iterative dataloaders for the trainer funtion? I mean in the current implementation, this just goes till the end of length of dataloader and then it terminates, it does not loop again, could you explain how I can use trainer.py with iterative dataloaders please? thanks ", "It's hard to know what's going on without seeing the command/script you are executing. In particular, the `Trainer` logs a lot of info regarding the number of steps/epochs at the beginning of training that could be useful to debug this.", "Hi @sgugger I spent really the whole day long hours continously, cannot see what is going on, this really needs someone of more expertise. this is hard for me to see the reason. To me callbaxks are changing the dataloader but not sure where it is happening. \r\n", "@sgugger I added the codes in https://github.com/rabeehk/debug , here is how to run:\r\n```\r\npip install -r requirements.txt\r\npython setup.py develop\r\ncd seq2seq \r\npython finetune_t5_trainer.py configs/mrpc_adapter_local.json\r\n```\r\nThe results of running the codes is that after epoch 1 the dataloader is no more called resulting in labels not being inside the batch. I made the test case small to run fast, could you have a look, this is my only hope to fix this issue. thank you \r\n\r\n```\r\n### epochs_trained, num_train_epochs 0 4000\r\n#### epoch 0\r\nstep 0 dict_keys(['input_ids', 'attention_mask', 'decoder_input_ids', 'labels'])\r\n### in the loss dict_keys(['input_ids', 'attention_mask', 'decoder_input_ids', 'labels'])\r\n@@@ after\r\n#### epoch 1\r\nstep 0 dict_keys(['input_ids', 'attention_mask', 'decoder_input_ids'])\r\n### in the loss dict_keys(['input_ids', 'attention_mask', 'decoder_input_ids'])\r\nTraceback (most recent call last):\r\n File \"finetune_t5_trainer.py\", line 250, in <module>\r\n main()\r\n File \"finetune_t5_trainer.py\", line 183, in main\r\n model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\r\n File \"/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/trainers/trainer.py\", line 789, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/trainers/trainer.py\", line 1141, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/trainers/t5_trainer.py\", line 339, in compute_loss\r\n labels = inputs.pop(\"labels\")\r\nKeyError: 'labels'\r\n 0%| \r\n```\r\n\r\n\r\n", "Hi\r\nhere is the solution, the way trainer works is having 1 iterative datasets with max_steps, then the issue was that cycle has caching in memory under the hood, then after first epoch, when modifying inputs, this was using it in the next iter and was craching, fixed with defining cycle as:\r\n\r\n```\r\ndef cycle(iterable):\r\n while True:\r\n for x in iterable:\r\n yield x\r\n```\r\n\r\nand iterating over max_steps in the MultiTaskDataLoader.", "I'm really sorry to bother you. Could you please tell me how to modify it specifically? I have also encountered this problem. " ]
1,606
1,652
1,606
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.5.1 - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): yes - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help Trainer: @sgugger Text Generation: @patrickvonplaten @TevenLeScao T5: @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner ## Information Hi I am using dataloader of below, after first epoch it finishes and trainer does not continue with max_steps, could you point me to the issue? I set is_sized_dataset to False. Thank you. ``` class TaskDataLoader: """Wrapper around dataloader to keep the task names.""" def __init__(self, task_name, dataset, batch_size=8, collate_fn=None, drop_last=False, num_workers=0, sampler=None): self.dataset = dataset self.task_name = task_name self.data_loader = DataLoader(self.dataset, batch_size=batch_size, sampler=sampler, collate_fn=collate_fn, drop_last=drop_last, num_workers=num_workers) def __len__(self): return len(self.data_loader) #self.dataset.num_rows def __iter__(self): for batch in self.data_loader: yield batch class MultiTaskDataLoader: """Given a dictionary of task: dataset, returns a multi-task dataloader which uses temperature sampling to sample different datasets.""" def __init__(self, tasks_to_datasets, batch_size=8, collate_fn=None, drop_last=False, num_workers=0, temperature=100.0): # Computes a mapping from task to dataloaders. self.task_to_dataloaders = {} for task, dataset in tasks_to_datasets.items(): dataloader = TaskDataLoader(task, dataset, batch_size, collate_fn=collate_fn, drop_last=drop_last, num_workers=num_workers) self.task_to_dataloaders.update({task: dataloader}) self.tasknames = list(self.task_to_dataloaders.keys()) # Computes the temperature sampling weights. self.sampling_weights = self.temperature_sampling(self.dataloader_sizes.values(), temperature) self.dataiters = {k: cycle(v) for k, v in self.task_to_dataloaders.items()} def temperature_sampling(self, dataset_sizes, temp): total_size = sum(dataset_sizes) weights = np.array([(size / total_size) ** (1.0 / temp) for size in dataset_sizes]) return weights/np.sum(weights) @property def dataloader_sizes(self): if not hasattr(self, '_dataloader_sizes'): self._dataloader_sizes = {k: len(v) for k, v in self.task_to_dataloaders.items()} return self._dataloader_sizes def __len__(self): return sum(v for k, v in self.dataloader_sizes.items()) def __iter__(self): outputs = {} for i in range(len(self)): taskname = np.random.choice(self.tasknames, p=self.sampling_weights) dataiter = self.dataiters[taskname] outputs["batch"] = next(dataiter) outputs["task"] = taskname yield outputs class Trainer(): """This is the trainer class which is responsible for distributing the data in case of multiple TPUs/GPUs.""" def __init__(self, dataset_names_to_datasets): self.dataset_names_to_datasets = dataset_names_to_datasets self.batch_size = 8 self.local_rank = -1 # this is not -1 in case of multi-gpu self.collate_fn = None self.drop_last = False self.num_workers = 0 def get_sharded_data(self, num_replicas, rank): """Returns the sharded data belonging to the given rank.""" sharded_dataset_names_to_datasets = {} for dataset_name, dataset in self.dataset_names_to_datasets: sharded_data = dataset.shard(num_replicas, rank) sharded_dataset_names_to_datasets.update({dataset_name: sharded_data}) return sharded_dataset_names_to_datasets def get_train_dataset_shards(self): """In case of multiprocessing, returns the sharded data for the given rank.""" if is_torch_tpu_available(): if xm.xrt_world_size() > 1: return self.get_sharded_data(num_replicas=xm.xrt_world_size(), rank=xm.get_ordinal()) elif self.local_rank != -1: return self.get_sharded_data(num_replicas=xm.xrt_world_size(), rank=xm.get_ordinal()) else: return self.dataset_names_to_datasets def get_train_dataloader(self): """Returns the multi-task dataloader, each batch belongs to one task dataset.""" dataset_names_to_datasets = self.get_train_dataset_shards() dataloader = MultiTaskDataLoader(dataset_names_to_datasets, batch_size=self.batch_size, collate_fn=self.collate_fn, drop_last=self.drop_last, num_workers=self.num_workers) return dataloader ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8859/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8858
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8858/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8858/comments
https://api.github.com/repos/huggingface/transformers/issues/8858/events
https://github.com/huggingface/transformers/issues/8858
753,808,192
MDU6SXNzdWU3NTM4MDgxOTI=
8,858
[trainer] add distributed_env to TrainingArguments
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
CONTRIBUTOR
null
As discussed in https://github.com/huggingface/transformers/pull/8823, it's not simple to check whether the downstream code is running in distributed mode or not: it currently requires checking `self.args.local_rank != -1`, which is far from obvious. So we were discussing adding a flag like `distributed_env` so that the downstream code could do a much simpler, more intuitive check. I'm not sure whether we just need True/False for DDP, or whether we also need another flag telling us if we are under DP as well? @sgugger
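A minimal sketch of the kind of convenience check being discussed — the property names and semantics below are assumptions for illustration, not the final `TrainingArguments` API:

```
from dataclasses import dataclass


@dataclass
class MyTrainingArguments:
    # mirrors the existing field; -1 means "not launched with torch.distributed"
    local_rank: int = -1
    n_gpu: int = 1

    @property
    def is_distributed(self) -> bool:
        # today downstream code has to remember this magic value itself
        return self.local_rank != -1

    @property
    def is_data_parallel(self) -> bool:
        # DP (single process driving several GPUs), as opposed to DDP
        return self.local_rank == -1 and self.n_gpu > 1


args = MyTrainingArguments(local_rank=0)
if args.is_distributed:
    print("running under DDP")
```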
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8858/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8858/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8857
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8857/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8857/comments
https://api.github.com/repos/huggingface/transformers/issues/8857/events
https://github.com/huggingface/transformers/pull/8857
753,794,573
MDExOlB1bGxSZXF1ZXN0NTI5ODA0OTY2
8,857
keys_to_ignore_at_inference -> output_keys_to_ignore_at_inference
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not sure this is really worth the hassle then.", "> Not sure this is really worth the hassle then.\r\n\r\nAgree, let's leave it. It's just a personal cosmetic change, so not worth the change." ]
1,606
1,651
1,606
COLLABORATOR
null
# What does this PR do? @patrickvonplaten mentioned in #8633 that he was not happy with a name I picked, so I changed it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8857/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8857/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8857", "html_url": "https://github.com/huggingface/transformers/pull/8857", "diff_url": "https://github.com/huggingface/transformers/pull/8857.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8857.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8856
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8856/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8856/comments
https://api.github.com/repos/huggingface/transformers/issues/8856/events
https://github.com/huggingface/transformers/pull/8856
753,791,456
MDExOlB1bGxSZXF1ZXN0NTI5ODAyNTU5
8,856
Make the big table creation/check platform independent
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
COLLABORATOR
null
# What does this PR do? As @jplu mentioned in #8813, the check that the big table of models/tokenizers is up-to-date (done in `make quality`) requires all three backends installed (plus tokenizers and sentencepiece). This PR amends the script to use the objects in the init (which are always there, thanks to the dummies) instead of the dicts in the auto modules (which are set to None if a specific backend is not installed). In passing, it adds aliases `MT5Tokenizer` and `MT5TokenizerFast` (to `T5Tokenizer` and `T5TokenizerFast` respectively) because otherwise the script does not detect the tokenizers associated with this model, cc @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8856/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8856", "html_url": "https://github.com/huggingface/transformers/pull/8856", "diff_url": "https://github.com/huggingface/transformers/pull/8856.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8856.patch", "merged_at": 1606841157000 }
https://api.github.com/repos/huggingface/transformers/issues/8855
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8855/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8855/comments
https://api.github.com/repos/huggingface/transformers/issues/8855/events
https://github.com/huggingface/transformers/issues/8855
753,773,533
MDU6SXNzdWU3NTM3NzM1MzM=
8,855
KeyError: 'labels' in training_step in transformers/trainer.py
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This happens in fact after the first epoch, could you think of the reason why this is the case? I tested the dataloader alone and it generates the epochs properly for any number of epochs.", "during the first epoch my batches have all info\r\n`batch inside multitask dict_keys(['input_ids', 'attention_mask', 'decoder_input_ids', 'labels'])`\r\n\r\nAfter first epoch, they miss the labels, my dataloader has an inner dataloader and I checked this is not called anymore after epoch 1.\r\n\r\n`batch inside multitask dict_keys(['input_ids', 'attention_mask', 'decoder_input_ids'])`\r\n\r\nthanks a lot in advance, I am really struggling with this issue, appreciate any helps/thoughts on this. \r\n", "here is the structure of multi-task dataloader, which is my train_dataloader, could you point me what might happen after first epoch? could you point me to any changes you introduce to the train dataloader after epoch 1? thanks \r\n\r\n```\r\nclass TaskDataLoader:\r\n \"\"\"Wrapper around dataloader to keep the task names.\"\"\"\r\n def __init__(self, task_name, dataset, batch_size=8,\r\n collate_fn=None, drop_last=False, num_workers=0, sampler=None):\r\n self.dataset = dataset\r\n self.task_name = task_name\r\n self.data_loader = DataLoader(self.dataset,\r\n batch_size=batch_size,\r\n sampler=sampler,\r\n collate_fn=collate_fn,\r\n drop_last=drop_last,\r\n num_workers=num_workers)\r\n def __len__(self):\r\n return len(self.data_loader)\r\n\r\n def __iter__(self):\r\n for batch in self.data_loader:\r\n print(\"### batch inside taskdataloader \", batch.keys())\r\n yield batch\r\n\r\n\r\n\r\nclass MultiTaskDataLoader:\r\n \"\"\"Given a dictionary of task: dataset, returns a multi-task dataloader\r\n which uses temperature sampling to sample different datasets.\"\"\"\r\n\r\n def __init__(self, tasks_to_datasets, batch_size=8, collate_fn=None,\r\n drop_last=False, num_workers=0, temperature=100.0):\r\n # Computes a mapping from task to dataloaders.\r\n self.task_to_dataloaders = {}\r\n for task, dataset in tasks_to_datasets.items():\r\n dataloader = TaskDataLoader(task, dataset, batch_size,\r\n collate_fn=collate_fn, drop_last=drop_last, num_workers=num_workers)\r\n self.task_to_dataloaders.update({task: dataloader})\r\n self.tasknames = list(self.task_to_dataloaders.keys())\r\n\r\n # Computes the temperature sampling weights.\r\n self.sampling_weights = self.temperature_sampling(self.dataloader_sizes.values(), temperature)\r\n self.dataiters = {k: cycle(v) for k, v in self.task_to_dataloaders.items()}\r\n\r\n def temperature_sampling(self, dataset_sizes, temp):\r\n total_size = sum(dataset_sizes)\r\n weights = np.array([(size / total_size) ** (1.0 / temp) for size in dataset_sizes])\r\n return weights/np.sum(weights)\r\n\r\n @property\r\n def dataloader_sizes(self):\r\n if not hasattr(self, '_dataloader_sizes'):\r\n self._dataloader_sizes = {k: len(v) for k, v in self.task_to_dataloaders.items()}\r\n return self._dataloader_sizes\r\n\r\n def __len__(self):\r\n return sum(v for k, v in self.dataloader_sizes.items())\r\n\r\n def num_examples(self):\r\n return sum(len(dataloader.dataset) for dataloader in self.task_to_dataloaders.values())\r\n\r\n def __iter__(self):\r\n outputs = {}\r\n for i in range(len(self)):\r\n taskname = np.random.choice(self.tasknames, p=self.sampling_weights)\r\n dataiter = self.dataiters[taskname]\r\n #outputs[\"batch\"] = next(dataiter)\r\n #outputs[\"task\"] = taskname\r\n #outputs = next(dataiter)\r\n #outputs[\"task\"] = taskname \r\n outputs = next(dataiter)\r\n print(\"### 
batch inside multitask \", outputs.keys())\r\n yield outputs\r\n\r\n# Example how this dataloader works.\r\nif __name__ == \"__main__\":\r\n batch_size = 10\r\n num_shards = 2\r\n rank = 0\r\n dataset1 = load_dataset('glue', 'rte', split=\"train[:16]\")\r\n dataset2 = load_dataset('glue', 'cola', split=\"train[:32]\")\r\n trainer = Trainer({'dataset1': dataset1, 'dataset2': dataset2})\r\n dataloader = trainer.get_train_dataloader()\r\n print(\"### length \", len(dataloader))\r\n for epoch in range(1000):\r\n for i, batch in enumerate(dataloader): #islice(dataloader, 5):\r\n print(\"## epoch \", epoch, \" i \", i) #batch) #batch)\r\n\r\n\r\n```\r\n", "solved in #8859" ]
1,606
1,606
1,606
NONE
null
## Environment info - `transformers` version: 3.5.1 - Platform: - Python version: 2.7 - PyTorch version (GPU?): GPU - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help tokenizers: @mfuntowicz Trainer: @sgugger T5: @patrickvonplaten examples/seq2seq: @patil-suraj ## Information I am using the finetune_seq2seq script. The issue arises inside `training_step` in the trainer, which raises an error because there is no "labels" key inside the batch; please find the stack trace below. I spent the whole day on this and could not figure out why the dataloader does not return the labels. I would be grateful if you could point me to possible reasons why this behaviour might happen and help me figure it out. Thanks. ``` Traceback (most recent call last): File "finetune_t5_trainer.py", line 250, in <module> main() File "finetune_t5_trainer.py", line 183, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/trainers/trainer.py", line 784, in train tr_loss += self.training_step(model, inputs) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/trainers/trainer.py", line 1125, in training_step loss = self.compute_loss(model, inputs) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/trainers/t5_trainer.py", line 338, in compute_loss labels = inputs.pop("labels") KeyError: 'labels' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8855/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8854
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8854/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8854/comments
https://api.github.com/repos/huggingface/transformers/issues/8854/events
https://github.com/huggingface/transformers/pull/8854
753,721,647
MDExOlB1bGxSZXF1ZXN0NTI5NzQ1Mzcy
8,854
Fix interaction of return_token_type_ids and add_special_tokens
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,607
1,607
MEMBER
null
Fix https://github.com/huggingface/transformers/issues/8578 It shouldn't raise a warning if `return_token_type_ids` is set to `False`. @thomwolf am I missing something here?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8854/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8854", "html_url": "https://github.com/huggingface/transformers/pull/8854", "diff_url": "https://github.com/huggingface/transformers/pull/8854.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8854.patch", "merged_at": 1607447042000 }
https://api.github.com/repos/huggingface/transformers/issues/8853
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8853/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8853/comments
https://api.github.com/repos/huggingface/transformers/issues/8853/events
https://github.com/huggingface/transformers/pull/8853
753,663,433
MDExOlB1bGxSZXF1ZXN0NTI5Njk3OTIw
8,853
[CI] skip docs-only jobs take #2
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Github actions decided to be down right at the moment where I wanted to monitor :disappointed: ", "Github actions? We are only doing this on circleCI - so far all seems to be working fine.", "Hah, was so disappointed I let it blind me and think it was all CI. Will continue.", "I'm pretty sure the CI should have run on the latest pipeline, as it did in the ones preceding it: https://app.circleci.com/pipelines/github/huggingface/transformers?branch=conda-ci\r\n\r\nChanges were done to .yml files.", "I believe the issue might come from the fact that it is looking at the build commit (d26ca66e2b12c5d5bc30be474d35f6e58dd21808) and comparing it to the previous build's commit (4780c8086a7aa95fc9b610cfe351ec5b226de669). It doesn't find any files, as the previous build commit (4780c8...) doesn't exist anymore, as I force pushed the branch, overwriting that commit's data.\r\n\r\nMaybe checking for empty results in the diff would help in that regard?", "OK, I found when we don't have `pipeline.git.base_revision` defined - it happens when PR is opened via github file edit. As I just did here: https://github.com/huggingface/transformers/pull/8884\r\nYou can see then what happens: https://app.circleci.com/pipelines/github/huggingface/transformers/16617/workflows/fea80cc9-2093-4053-b3c3-f315632ab3a6/jobs/129069\r\n\r\n```\r\n#!/bin/bash -eo pipefail\r\n# pipeline.git.base_revision is not always defined, so only proceed if all external vars are defined\r\nif test -n \"\" && test -n \"32f03035ce5b23abd8a1659f24f04b298319ae78\"\r\nthen\r\n if git diff --name-only ...32f03035ce5b23abd8a1659f24f04b298319ae78 | egrep -qv '\\.(md|rst)$'\r\n then\r\n echo \"Non-docs were modified in this PR, proceeding normally\"\r\n else\r\n echo \"Only docs were modified in this PR, quitting this job\"\r\n circleci step halt\r\n fi\r\nelse\r\n echo \"Can't perform skipping check w/o base_revision defined, continuing the job\"\r\nfi\r\n\r\nCan't perform skipping check w/o base_revision defined, continuing the job\r\n\r\nCircleCI received exit code 0\r\n```\r\n\r\nSo the workaround worked. The job continued normally.\r\n", "~Oh I always open PRs like that, so it was indeed the culprit for the two other times we saw that happen~.\r\nThat does not make sense since the I open the PR on GitHub but the CI runs on a commit. So forget I said anything!", "> I believe the issue might come from the fact that it is looking at the build commit ([d26ca66](https://github.com/huggingface/transformers/commit/d26ca66e2b12c5d5bc30be474d35f6e58dd21808)) and comparing it to the previous build's commit ([4780c80](https://github.com/huggingface/transformers/commit/4780c8086a7aa95fc9b610cfe351ec5b226de669)). It doesn't find any files, as the previous build commit (4780c8...) doesn't exist anymore, as I force pushed the branch, overwriting that commit's data.\r\n> \r\n> Maybe checking for empty results in the diff would help in that regard?\r\n\r\nOK, so this is another edge case. So this pipeline thing is totally borked :( Why can't it give us a normal commit range of the PR.\r\n\r\nSo let's comment out `circleci step halt` and I will work on take #3 that will be much more elaborate. Should I do it or will you? I'm not sure if it's ok to commit directly.", "You can comment it out! Thanks!", "So the proposed logic for take 3 will be:\r\n\r\n1. if pipeline.git.base_revision and pipeline.git.revision are defined\r\n2. if git diff --name-only range returns anything\r\n3. if what it returned in 2 is just docs\r\n4. 
then skip\r\n", "\r\n\r\n\r\n> You can comment it out! Thanks!\r\n\r\nDone.", "> ~Oh I always open PRs like that, so it was indeed the culprit for the two other times we saw that happen~.\r\n> That does not make sense since the I open the PR on GitHub but the CI runs on a commit. So forget I said anything!\r\n\r\nI'm not sure what you're saying - I think the point is that CircleCI can't find the branching point when the change is done via github file edit.\r\n\r\nNote that `git diff --name-only $(git merge-base --fork-point master)` doesn't work on CirlceCI - otherwise we would have figured out the range ourselves.", "Yes I don't do commit by editing files on GitHub, just the PR part, that's why I scratched what I was saying.", "Thank you for clarifying that, @sgugger. Then this lack of `pipeline.git.base_revision` appears to be random then." ]
1,606
1,606
1,606
CONTRIBUTOR
null
So we discovered CircleCI has a problem and `pipeline.git.base_revision` is unreliable - not always set - breaking the test. https://github.com/huggingface/transformers/pull/8826#issuecomment-735972196 We had a few PRs incorrectly skipping the jobs, as in this example: https://app.circleci.com/pipelines/github/huggingface/transformers/16541/workflows/17b20230-8d7c-4b36-813c-2681f2c8a977/jobs/128232 It's missing `<< pipeline.git.base_revision >>` in ``` if git diff --name-only << pipeline.git.base_revision >>...<< pipeline.git.revision >> | egrep -qv '\.(md|rst)$' ``` resulting in: ``` if git diff --name-only ...5170e5381b9fccdfb9405d665ecee0515efc6453 | egrep -qv '\.(md|rst)$' ``` and hence fails the test. (it's missing the first hash before `...`). This PR checks that the external variables `pipeline.git.base_revision` and `pipeline.git.revision` are set before we do the test. Should one of them be not set, the whole test is skipped and the job continues normally, regardless of whether it's docs only or not. Meanwhile I filed a question about why `pipeline.git.base_revision` is not always set: https://discuss.circleci.com/t/pipeline-git-base-revision-is-often-empty-which-reliable-variable-to-use/38301 Let's merge it at a time that one of us can monitor the next few PRs in case we need to back it out again. If you have to back it out - you only need to comment out this line: `circleci step halt` and leave the invocations in place. @sgugger, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8853/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8853", "html_url": "https://github.com/huggingface/transformers/pull/8853", "diff_url": "https://github.com/huggingface/transformers/pull/8853.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8853.patch", "merged_at": 1606846526000 }
https://api.github.com/repos/huggingface/transformers/issues/8852
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8852/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8852/comments
https://api.github.com/repos/huggingface/transformers/issues/8852/events
https://github.com/huggingface/transformers/pull/8852
753,586,376
MDExOlB1bGxSZXF1ZXN0NTI5NjM1NjEy
8,852
Remove deprecated `evalutate_during_training`
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Merging this because we need it for the v4.0.0, pinging @jplu so he's aware of the changes made to the TFTrainer." ]
1,606
1,606
1,606
COLLABORATOR
null
# What does this PR do? Replaces `evaluate_during_training` in the examples using the `Trainer` (as well as in the integrations and tf_trainer) with the new `evaluation_strategy`. Fixes #8792
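For reference, a minimal sketch of what the replacement looks like in a training script (the output directory and strategy value are placeholders):

```
from transformers import TrainingArguments

# old (deprecated): TrainingArguments(..., evaluate_during_training=True)
# new: pick an evaluation strategy explicitly
args = TrainingArguments(
    output_dir="./out",           # placeholder path
    evaluation_strategy="epoch",  # or "steps" together with eval_steps
)
```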
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8852/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8852/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8852", "html_url": "https://github.com/huggingface/transformers/pull/8852", "diff_url": "https://github.com/huggingface/transformers/pull/8852.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8852.patch", "merged_at": 1606752736000 }
https://api.github.com/repos/huggingface/transformers/issues/8851
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8851/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8851/comments
https://api.github.com/repos/huggingface/transformers/issues/8851/events
https://github.com/huggingface/transformers/pull/8851
753,583,495
MDExOlB1bGxSZXF1ZXN0NTI5NjMzMjc4
8,851
Transfoxl sequence classification
{ "login": "spatil6", "id": 6419011, "node_id": "MDQ6VXNlcjY0MTkwMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spatil6", "html_url": "https://github.com/spatil6", "followers_url": "https://api.github.com/users/spatil6/followers", "following_url": "https://api.github.com/users/spatil6/following{/other_user}", "gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}", "starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spatil6/subscriptions", "organizations_url": "https://api.github.com/users/spatil6/orgs", "repos_url": "https://api.github.com/users/spatil6/repos", "events_url": "https://api.github.com/users/spatil6/events{/privacy}", "received_events_url": "https://api.github.com/users/spatil6/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Same as GPT-2, this would benefit from also handling padding on the left; I'll work on this in another PR.", "@LysandreJik , I'll raise new PR, there was some conflicts in it. " ]
1,606
1,606
1,606
CONTRIBUTOR
null
This PR implements sequence classification for the Transformer-XL model. TransfoXLForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-1, GPT-2) do. Fixes #7623 (partially) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik
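As a rough illustration of "classify on the last token" (the general pattern used by causal-LM classification heads, not the exact code added in this PR — it assumes right padding and a defined pad token):

```
import torch


def last_token_pooling(logits: torch.Tensor, input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # logits: (batch, seq_len, num_labels) produced by a linear head on the hidden states
    # index of the last non-padding token in every sequence
    last_indices = (input_ids != pad_token_id).sum(dim=-1) - 1
    # gather the logits at that position for each example -> (batch, num_labels)
    return logits[torch.arange(logits.size(0)), last_indices]
```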
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8851/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8851", "html_url": "https://github.com/huggingface/transformers/pull/8851", "diff_url": "https://github.com/huggingface/transformers/pull/8851.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8851.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8850
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8850/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8850/comments
https://api.github.com/repos/huggingface/transformers/issues/8850/events
https://github.com/huggingface/transformers/pull/8850
753,570,943
MDExOlB1bGxSZXF1ZXN0NTI5NjIzMzQy
8,850
Add a direct link to the big table
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
COLLABORATOR
null
# What does this PR do? This PR adds an anchor to the big table of models/tokenizers to be able to generate a direct link to it, and it adds that link in the README.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8850/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8850", "html_url": "https://github.com/huggingface/transformers/pull/8850", "diff_url": "https://github.com/huggingface/transformers/pull/8850.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8850.patch", "merged_at": 1606750164000 }
https://api.github.com/repos/huggingface/transformers/issues/8849
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8849/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8849/comments
https://api.github.com/repos/huggingface/transformers/issues/8849/events
https://github.com/huggingface/transformers/issues/8849
753,336,432
MDU6SXNzdWU3NTMzMzY0MzI=
8,849
Some unintended things happen in Seq2SeqTrainer example
{ "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users/forest1988/followers", "following_url": "https://api.github.com/users/forest1988/following{/other_user}", "gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}", "starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forest1988/subscriptions", "organizations_url": "https://api.github.com/users/forest1988/orgs", "repos_url": "https://api.github.com/users/forest1988/repos", "events_url": "https://api.github.com/users/forest1988/events{/privacy}", "received_events_url": "https://api.github.com/users/forest1988/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I have never looked at the `finetune_trainer.py` script so I can't reply for the number of examples part.\r\nFor the MLFlow problem, I don't understand how the value of this parameter could be longer than 250 (if interpreted has a string) could you print it out for debugging?", "Thank you for your quick response!\r\n\r\nFirst, here is more detail of the error message about MLFlow problem. \r\nI apologize that I didn't give the information in the first of this issue.\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"finetune_trainer.py\", line 310, in <module>\r\n main()\r\n File \"finetune_trainer.py\", line 254, in main\r\n trainer.train(\r\n File \"/path/to/transformers/src/transformers/trainer.py\", line 713, in train\r\n self.control = self.callback_handler.on_train_begin(self.args, self.state, self.control)\r\n File \"/path/to/transformers/src/transformers/trainer_callback.py\", line 336, in on_train_begin\r\n return self.call_event(\"on_train_begin\", args, state, control)\r\n File \"/path/to/transformers/src/transformers/trainer_callback.py\", line 374, in call_event\r\n result = getattr(callback, event)(\r\n File \"/path/to/transformers/src/transformers/integrations.py\", line 502, in on_train_begin\r\n self.setup(args, state, model)\r\n File \"/path/to/transformers/src/transformers/integrations.py\", line 497, in setup\r\n mlflow.log_params(dict(combined_dict_items[i : i + MLflowCallback.MAX_LOG_SIZE]))\r\n File \"$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/tracking/fluent.py\", line 470, in log_params\r\n MlflowClient().log_batch(run_id=run_id, metrics=[], params=params_arr, tags=[])\r\n File \"$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/tracking/client.py\", line 830, in log_batch\r\n self._tracking_client.log_batch(run_id, metrics, params, tags)\r\n File \"$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/tracking/_tracking_service/client.py\", line 246, in log_batch\r\n self.store.log_batch(run_id=run_id, metrics=metrics, params=params, tags=tags)\r\n File \"$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/store/tracking/file_store.py\", line 852, in log_batch\r\n _validate_batch_log_data(metrics, params, tags)\r\n File \"$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/utils/validation.py\", line 221, in _validate_batch_log_data\r\n _validate_param(param.key, param.value)\r\n File \"$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/utils/validation.py\", line 101, in _validate_param\r\n _validate_length_limit(\"Param value\", MAX_PARAM_VAL_LENGTH, value)\r\n File \"$HOME/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/mlflow/utils/validation.py\", line 169, in _validate_length_limit\r\n raise MlflowException(\r\nmlflow.exceptions.MlflowException: Param value '{'summarization': {'length_penalty': 1.0, 'max_length': 128, 'min_length': 12, 'num_beams': 4}, 'summarization_cnn': {'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'num_beams': 4}, 'summarization_xsum': {'length_penalty': 1.0, 'max_leng' had length 293, which exceeded length limit of 250\r\n```\r\nThe error message says the error is caused in line 497 of `integrations.py`. 
\r\n\r\nhttps://github.com/huggingface/transformers/blob/5ced23dc845c76d5851e534234b47a5aa9180d40/src/transformers/integrations.py#L497\r\n\r\nI added logger.info before that.\r\n\r\n```python\r\n # debug\r\n logger.info(\"--- dict --- %s\", dict(combined_dict_items[i : i + MLflowCallback.MAX_LOG_SIZE]))\r\n```\r\n\r\nThen, the output is as below:\r\n\r\n```python\r\n[INFO|integrations.py:499] 2020-11-30 16:39:51,612 >> --- dict --- {'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'use_bfloat16': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'is_encoder_decoder': True, 'is_decoder': False, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 128, 'min_length': 12, 'do_sample': False, 'early_stopping': True, 'num_beams': 4, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 3, 'bad_words_ids': None, 'num_return_sequences': 1, 'chunk_size_feed_forward': 0, 'architectures': ['BartModel', 'BartForConditionalGeneration', 'BartForSequenceClassification'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1', 2: 'LABEL_2'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1, 'LABEL_2': 2}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 0, 'pad_token_id': 1, 'eos_token_id': 2, 'sep_token_id': None, 'decoder_start_token_id': 2, 'task_specific_params': {'summarization': {'length_penalty': 1.0, 'max_length': 128, 'min_length': 12, 'num_beams': 4}, 'summarization_cnn': {'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'num_beams': 4}, 'summarization_xsum': {'length_penalty': 1.0, 'max_length': 62, 'min_length': 11, 'num_beams': 6}}, 'xla_device': None, '_name_or_path': 'facebook/bart-large', 'classif_dropout': 0.1, 'model_type': 'bart', 'num_hidden_layers': 12, 'vocab_size': 50265, 'd_model': 1024, 'encoder_ffn_dim': 4096, 'encoder_layers': 12, 'encoder_attention_heads': 16, 'encoder_layerdrop': None, 'decoder_layerdrop': None, 'decoder_ffn_dim': 4096, 'decoder_layers': 12, 'decoder_attention_heads': 16, 'max_position_embeddings': 1024, 'init_std': 0.02, 'activation_function': 'gelu', 'scale_embedding': False, 'normalize_embedding': True, 'normalize_before': False, 'add_final_layer_norm': False, 'add_bias_logits': False, 'static_position_embeddings': False, 'attention_dropout': None, 'activation_dropout': 0.1, 'dropout': None, 'classifier_dropout': 0.0, 'extra_pos_embeddings': 2, 'force_bos_token_to_be_generated': False, 'do_blenderbot_90_layernorm': False, 'use_cache': True, 'output_dir': './xsum_bart-large_no_cuda/', 'overwrite_output_dir': False, 'do_train': True, 'do_eval': True, 'do_predict': True, 'model_parallel': False, 'evaluation_strategy': 'epoch', 'prediction_loss_only': False, 'per_device_train_batch_size': 8, 'per_device_eval_batch_size': 8, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 1, 'eval_accumulation_steps': None, 'learning_rate': 3e-05, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 3.0, 'max_steps': -1, 'warmup_steps': 0, 'logging_dir': 'runs/Nov30_16-39-34_hamo', 'logging_first_step': False, 'logging_steps': 500, 'save_steps': 500, 'save_total_limit': 5, 'no_cuda': True, 'seed': 42, 'fp16': False}\r\n```\r\n\r\nThe error message seems to indicate the `'task_specific_params'`, so I've checked the length of it.\r\n\r\n```\r\n>>> str = \"{'summarization': {'length_penalty': 
1.0, 'max_length': 128, 'min_length': 12, 'num_beams': 4}, 'summarization_cnn': {'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'num_beams': 4}, 'summarization_xsum': {'length_penalty': 1.0, 'max_length': 62, 'min_length': 11, 'num_beams': 6}}\"\r\n>>> len(str)\r\n293\r\n```\r\n\r\nShould I have added some processing to `task_specific_params`?\r\n\r\nThank you.\r\n", "Mmm, that's weird. Pining @noise-field as this was the user that added integration with MLFlow.", "Hi, @forest1988 \r\nThank you for the detailed bug description. MLFlow does limit the parameter length (see: mlflow/mlflow#1976). \r\n\r\nI think we probably need to stop sending arbitrarily nested parameters as string literals because they are:\r\n- not actually single parameters\r\n- can easily overflow the 250 symbols limit\r\n\r\nAnother idea would be to skip long parameters and produce a warning like in case of invalid metrics values. \r\n\r\n@sgugger what would you suggest would be a better option? I could fix it this week.\r\n\r\n", "Maybe we could just skip the args we are trying to send to MLFlow when they get over the limit?", "Hi, I have the same issue. I'm using more or less the standard `run_glue.py` script for finetuning. Most models worked but BART threw the same error as above.\r\nFortunately, this error happened at the start. But, I wrote my own trainer callback handler which failed only after 1-22 hours in the training process and interrupted the training, because some backend API failed to respond.\r\n\r\nI'm not sure whether it might make some sense to just wrap all the callback calls into try-catch blocks so the training will continue in any case?", "Here is my solution https://github.com/huggingface/transformers/issues/8967#issue-758695096", "There was an error where the callback tried to log a value that is too long for MLflow. It was fixed in this PR: #8875", "Hi, \r\nIt seems PR #8875 will solve this issue, are there any problems that block the PR from merging?\r\n\r\n(Added: I'm sorry for the duplicate comments.)\r\n\r\nCurrently, I am dealing with this issue temporarily as follows.\r\n(Trainer works without MLflow integration)\r\n\r\n```\r\n# remove MLflowCalback temporarily\r\nfrom transformers.integrations import MLflowCallback\r\ntrainer.callback_handler.remove_callback(MLflowCallback)\r\n```", "Oh sorry @noise-field it seems like your PR slipped through the cracks of our review process. In general, don't hesitate to ping the person that reviewed your PR if there is no activity in a week and you believe you addressed every comment.", "I'm experiencing this issue again with the trainer when doing NER with `AutoModelForTokenClassification`. The model config containing the `label2id` and `id2label` fields can be quite long when there are many entity types, and it cannot be split under the current strategy. 
\r\n\r\nExample error when trying to log the `id2label` from model config:\r\n\r\n```\r\n 873 combined_dict_items = list(combined_dict.items()) \r\n 874 for i in range(0, len(combined_dict_items), self._MAX_PARAMS_TAGS_PER_BATCH): \r\n--> 875 self._ml_flow.log_params(dict(combined_dict_items[i : i + self._MAX_PARAMS_TAGS_PER_BATCH])) \r\n 876 mlflow_tags = os.getenv(\"MLFLOW_TAGS\", None)\r\n 877 if mlflow_tags:\r\n\r\n...\r\n\r\nRestException: INVALID_PARAMETER_VALUE: Param value '{0: 'LABEL_0', 1: 'LABEL_1', 2: 'LABEL_2', 3: 'LABEL_3', 4: 'LABEL_4', 5: 'LABEL_5', 6: 'LABEL_6', 7: 'LABEL_7', 8: 'LABEL_8', 9: 'LABEL_9', 10: 'LABEL_10', 11: 'LABEL_11', 12: 'LABEL_12', 13: 'LABEL_13', 14: 'LABEL_14', 15: 'LABEL_15', 16: 'LABEL_16' had length 316, which exceeded length limit of 250\r\n```\r\n\r\nEdit: a work-around is to set `MLFLOW_FLATTEN_PARAMS` to true. This limit has been [increased to 500 in MLFlow](https://github.com/mlflow/mlflow/pull/6358)" ]
1,606
1,661
1,612
CONTRIBUTOR
null
I posted this report in the HuggingFace Forum at first, but @BramVanroy kindly told me to post the report here instead of the forum. The link to the post in the forum: https://discuss.huggingface.co/t/some-unintended-things-happen-in-seq2seqtrainer-example/2361 ## Environment info - `transformers` version: 4.0.0-rc-1 - The latest commit: commit 5ced23dc845c76d5851e534234b47a5aa9180d40 - Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help Trainer: @sgugger examples/seq2seq: @patil-suraj ## Information Model I am using (Bert, XLNet ...): facebook/bart-large The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) I used the XSum dataset following the README of `examples/seq2seq`. ## To reproduce ### What seems strange - The number of data pairs is not correctly recognized. - MLflow cannot treat the params (too long). I wasn’t sure if I should divide these into two issues, but in the end, I decided on one. If it is better to divide them into two, I will modify it. I first noticed this strangeness when I use a different dataset than the those in the example. I again follow the README of `examples/seq2seq` to check if my modification causes the problem or not. Having checked https://github.com/huggingface/transformers/issues/8792, I used `--evaluation_strategy epoch` instead of `--evaluate_during_training`. ### Run official example scripts ``` $ CUDA_VISIBLE_DEVICES=0 python finetune_trainer.py \ --data_dir $XSUM_DIR \ --learning_rate=3e-5 \ --fp16 \ --do_train --do_eval --do_predict \ --evaluation_strategy epoch \ --predict_with_generate \ --n_val 1000 \ --model_name_or_path facebook/bart-large \ --output_dir ./xsum_bart-large/ \ --save_total_limit 5 \ 2>&1 | tee tmp.log ``` ## Expected behavior ### Log ``` [INFO|trainer.py:667] 2020-11-30 08:10:43,836 >> ***** Running training ***** [INFO|trainer.py:668] 2020-11-30 08:10:43,836 >> Num examples = 204016 [INFO|trainer.py:669] 2020-11-30 08:10:43,836 >> Num Epochs = 3 [INFO|trainer.py:670] 2020-11-30 08:10:43,836 >> Instantaneous batch size per device = 8 [INFO|trainer.py:671] 2020-11-30 08:10:43,836 >> Total train batch size (w. parallel, distributed & accumulation) = 8 [INFO|trainer.py:672] 2020-11-30 08:10:43,836 >> Gradient Accumulation steps = 1 [INFO|trainer.py:673] 2020-11-30 08:10:43,836 >> Total optimization steps = 76506 ... mlflow.exceptions.MlflowException: Param value '{'summarization': {'length_penalty': 1.0, 'max_length': 128, 'min_length': 12, 'num_beams': 4}, 'summarization_cnn': {'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'num_beams': 4}, 'summarization_xsum': {'length_penalty': 1.0, 'max_leng' had length 293, which exceeded length limit of 250 ``` ### (Reference) Dataset length ```sh $ cd $XSUM_DIR/ $ wc -l * 11333 test.source 11333 test.target 204017 train.source 204017 train.target 11327 val.source 11327 val.target 453354 total ``` ### Details #### The number of examples shown At first, I tried to use the dataset with 40,000 pairs for training, but it was shown that `Num examples = 39999`. I don't know why, so I've checked the example with the XSum dataset. 
Checking the number of lengths, it seems the XSum train set used in the example has 204017 pairs, but it is shown `Num examples = 204016` as above. I thought the dataset was supposed to start with the first line, but am I mistaken? For example, is the first line treated as a header? #### MLflow can not treat params in this case As shown above, the length of `param value` exceeds the limit that MLflow can handle. Do I just need to change the settings of MLflow? Or, should I add some modifications to `param value` to be used in MLflow? Thank you in advance.
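A minimal sketch of the kind of guard that would avoid this crash — this is not the fix that ended up in `transformers`, just an illustration of skipping values longer than MLflow's per-value limit before logging:

```
MAX_PARAM_VAL_LENGTH = 250  # MLflow's per-value length limit


def filter_long_params(params: dict) -> dict:
    kept = {}
    for key, value in params.items():
        if len(str(value)) > MAX_PARAM_VAL_LENGTH:
            # e.g. `task_specific_params` above is 293 characters once stringified
            continue
        kept[key] = value
    return kept

# mlflow.log_params(filter_long_params(combined_dict))
```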
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8849/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8848
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8848/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8848/comments
https://api.github.com/repos/huggingface/transformers/issues/8848/events
https://github.com/huggingface/transformers/pull/8848
753,285,965
MDExOlB1bGxSZXF1ZXN0NTI5Mzk1MTk3
8,848
Fix docstring for language code in mBart
{ "login": "RQuispeC", "id": 28014561, "node_id": "MDQ6VXNlcjI4MDE0NTYx", "avatar_url": "https://avatars.githubusercontent.com/u/28014561?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RQuispeC", "html_url": "https://github.com/RQuispeC", "followers_url": "https://api.github.com/users/RQuispeC/followers", "following_url": "https://api.github.com/users/RQuispeC/following{/other_user}", "gists_url": "https://api.github.com/users/RQuispeC/gists{/gist_id}", "starred_url": "https://api.github.com/users/RQuispeC/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RQuispeC/subscriptions", "organizations_url": "https://api.github.com/users/RQuispeC/orgs", "repos_url": "https://api.github.com/users/RQuispeC/repos", "events_url": "https://api.github.com/users/RQuispeC/events{/privacy}", "received_events_url": "https://api.github.com/users/RQuispeC/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is great thank you!" ]
1,606
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? Fixes #8534 ## Before submitting - [X] This PR fixes a typo or improves the docs. ## Who can review? @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8848/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8848", "html_url": "https://github.com/huggingface/transformers/pull/8848", "diff_url": "https://github.com/huggingface/transformers/pull/8848.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8848.patch", "merged_at": 1606815877000 }
https://api.github.com/repos/huggingface/transformers/issues/8847
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8847/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8847/comments
https://api.github.com/repos/huggingface/transformers/issues/8847/events
https://github.com/huggingface/transformers/issues/8847
753,272,532
MDU6SXNzdWU3NTMyNzI1MzI=
8,847
KeyError: 'mt5'
{ "login": "astakara48", "id": 58538112, "node_id": "MDQ6VXNlcjU4NTM4MTEy", "avatar_url": "https://avatars.githubusercontent.com/u/58538112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astakara48", "html_url": "https://github.com/astakara48", "followers_url": "https://api.github.com/users/astakara48/followers", "following_url": "https://api.github.com/users/astakara48/following{/other_user}", "gists_url": "https://api.github.com/users/astakara48/gists{/gist_id}", "starred_url": "https://api.github.com/users/astakara48/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astakara48/subscriptions", "organizations_url": "https://api.github.com/users/astakara48/orgs", "repos_url": "https://api.github.com/users/astakara48/repos", "events_url": "https://api.github.com/users/astakara48/events{/privacy}", "received_events_url": "https://api.github.com/users/astakara48/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "pip install transformers==4.0.0rc1 sentencepiece\r\n", "> pip install transformers==4.0.0rc1 sentencepiece\r\n\r\nThank you! You are my hero" ]
1,606
1,606
1,606
NONE
null
I am trying to use the google/mt5 model, but I get KeyError: 'mt5'. How do I fix this? KeyError Traceback (most recent call last) <ipython-input-2-207aa15555f1> in <module> ----> 1 tokenizer = AutoTokenizer.from_pretrained("google/mt5-small") /usr/local/lib/python3.8/dist-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 304 config = kwargs.pop("config", None) 305 if not isinstance(config, PretrainedConfig): --> 306 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) 307 308 if "bert-base-japanese" in str(pretrained_model_name_or_path): /usr/local/lib/python3.8/dist-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 334 335 if "model_type" in config_dict: --> 336 config_class = CONFIG_MAPPING[config_dict["model_type"]] 337 return config_class.from_dict(config_dict, **kwargs) 338 else: KeyError: 'mt5'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8847/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8847/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8846
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8846/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8846/comments
https://api.github.com/repos/huggingface/transformers/issues/8846/events
https://github.com/huggingface/transformers/issues/8846
753,270,206
MDU6SXNzdWU3NTMyNzAyMDY=
8,846
How to globally change the PYTORCH_PRETRAINED_BERT_CACHE path
{ "login": "jzhoubu", "id": 20299401, "node_id": "MDQ6VXNlcjIwMjk5NDAx", "avatar_url": "https://avatars.githubusercontent.com/u/20299401?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jzhoubu", "html_url": "https://github.com/jzhoubu", "followers_url": "https://api.github.com/users/jzhoubu/followers", "following_url": "https://api.github.com/users/jzhoubu/following{/other_user}", "gists_url": "https://api.github.com/users/jzhoubu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jzhoubu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jzhoubu/subscriptions", "organizations_url": "https://api.github.com/users/jzhoubu/orgs", "repos_url": "https://api.github.com/users/jzhoubu/repos", "events_url": "https://api.github.com/users/jzhoubu/events{/privacy}", "received_events_url": "https://api.github.com/users/jzhoubu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello! You can set the environment variable `TRANSFORMERS_CACHE` to define which location should be used to store weights.", "@LysandreJik Thanks. \r\nJust to make sure I understand your point correctly. If I run `TRANSFORMERS_CACHE=ELSE_WHERE train.sh` in cmd, are all the downloaded pretrained cache files stored under `ELSE_WHERE` rather than `~/.pytorch_pretrained_bert`?\r\n\r\nIt doesn't work for me. Specifically, I run `TRANSFORMERS_CACHE=./cache_dir bash train.sh` instead of `bash train.sh` and the cache files are still download to my HOME dir `/homes/jzhoubu/.pytorch_pretrained_bert`. Below is the log.\r\n```\r\n11/30/2020 23:12:36 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /homes/jzhoubu/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084\r\n11/30/2020 23:12:42 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /homes/jzhoubu/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba\r\n11/30/2020 23:12:42 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /homes/jzhoubu/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpwf4pab4l\r\n```\r\n\r\n\r\n", "Ah, given your logs, it seems you're running on a very very old version (`pytorch_pretrained_bert`, which is 1+ years old). While we recommend updating to more recent versions, you should be able to obtain the same behavior by setting the `PYTORCH_PRETRAINED_BERT_CACHE` environment variable instead.\r\n\r\nFor future issues, please always complete the issue template, the information related to your environment is especially important for us to help you correctly.", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
Hi, all. I don't have enough disk space under `~/` to download the pre-trained model. When I run others' experiments, I always need to change their code from ``` model = BertForQuestionAnswering.from_pretrained(args.bert_model, \ cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(-1)) ``` to something like ``` PYTORCH_PRETRAINED_BERT_CACHE = ELSE_WHERE model = BertForQuestionAnswering.from_pretrained(args.bert_model, \ cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(-1)) ``` or ``` model = BertForQuestionAnswering.from_pretrained(args.bert_model, \ cache_dir=ELSE_WHERE) ``` For the convenience of those who don't have enough disk space under the home dir `~/`, I wonder if there is any way to globally change this `PYTORCH_PRETRAINED_BERT_CACHE` value once and for all.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8846/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8845
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8845/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8845/comments
https://api.github.com/repos/huggingface/transformers/issues/8845/events
https://github.com/huggingface/transformers/pull/8845
753,226,767
MDExOlB1bGxSZXF1ZXN0NTI5MzQ3NzEw
8,845
Correct docstring.
{ "login": "Fraser-Greenlee", "id": 8402500, "node_id": "MDQ6VXNlcjg0MDI1MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8402500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Fraser-Greenlee", "html_url": "https://github.com/Fraser-Greenlee", "followers_url": "https://api.github.com/users/Fraser-Greenlee/followers", "following_url": "https://api.github.com/users/Fraser-Greenlee/following{/other_user}", "gists_url": "https://api.github.com/users/Fraser-Greenlee/gists{/gist_id}", "starred_url": "https://api.github.com/users/Fraser-Greenlee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Fraser-Greenlee/subscriptions", "organizations_url": "https://api.github.com/users/Fraser-Greenlee/orgs", "repos_url": "https://api.github.com/users/Fraser-Greenlee/repos", "events_url": "https://api.github.com/users/Fraser-Greenlee/events{/privacy}", "received_events_url": "https://api.github.com/users/Fraser-Greenlee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,607
1,606
CONTRIBUTOR
null
Related issue: https://github.com/huggingface/transformers/issues/8837 # What does this PR do? Updating the PreTrainedTokenizerBase.pad argument default value docstring to show the correct default value. **Current** docstring: https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/tokenization_utils_base.py#L2469-L2470 arg: https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/tokenization_utils_base.py#L2431-L2472 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Fixes #8837 (issue) I'm also curious why this method has default `padding=True`? Other methods (prepare_for_model, encode, __call__, encode_plus, batch_encode_plus) have `padding=False`. Its default means the DataCollatorForLanguageModeling pads input examples which means it can't be simply switched with the default collator in the [example script](https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_example_script/%7B%7Bcookiecutter.directory_name%7D%7D/run_%7B%7Bcookiecutter.example_shortcut%7D%7D.py#L287-L306) without breaking the attention mask. https://github.com/huggingface/transformers/blob/610cb106a216cfb99d840648b576f9502189e4d1/src/transformers/data/data_collator.py#L253 @mfuntowicz @LysandreJik @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8845/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8845", "html_url": "https://github.com/huggingface/transformers/pull/8845", "diff_url": "https://github.com/huggingface/transformers/pull/8845.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8845.patch", "merged_at": 1606746811000 }
https://api.github.com/repos/huggingface/transformers/issues/8844
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8844/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8844/comments
https://api.github.com/repos/huggingface/transformers/issues/8844/events
https://github.com/huggingface/transformers/issues/8844
753,145,983
MDU6SXNzdWU3NTMxNDU5ODM=
8,844
mT5 fine-tuned model generates wrong answers
{ "login": "JejuWayfarer", "id": 49282663, "node_id": "MDQ6VXNlcjQ5MjgyNjYz", "avatar_url": "https://avatars.githubusercontent.com/u/49282663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JejuWayfarer", "html_url": "https://github.com/JejuWayfarer", "followers_url": "https://api.github.com/users/JejuWayfarer/followers", "following_url": "https://api.github.com/users/JejuWayfarer/following{/other_user}", "gists_url": "https://api.github.com/users/JejuWayfarer/gists{/gist_id}", "starred_url": "https://api.github.com/users/JejuWayfarer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JejuWayfarer/subscriptions", "organizations_url": "https://api.github.com/users/JejuWayfarer/orgs", "repos_url": "https://api.github.com/users/JejuWayfarer/repos", "events_url": "https://api.github.com/users/JejuWayfarer/events{/privacy}", "received_events_url": "https://api.github.com/users/JejuWayfarer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @JejuWayfarer, \r\n\r\nit would be awesome if you could post such a question on the forum since it's not really a specific bug report, but more a question/problem on a case-specific training script. Could you maybe post your question on this thread - I'm sure you'll have more luck of getting a good answer there. \r\nThis could be a good thread: https://discuss.huggingface.co/t/mt5-t5v1-1-fine-tuning-results/2098 or open a new one :-) ", "Thank you so much :) I need to use the forum. I will ask there.", "> ## Environment info\r\n> * `transformers` version: 4.0.0-rc-1\r\n> * Platform: Linux\r\n> * Python version: 3.7.9\r\n> * PyTorch version (GPU?): 1.4.0\r\n> * Tensorflow version (GPU?): NA\r\n> * Using GPU in script?: yes\r\n> * Using distributed or parallel set-up in script?: no\r\n> \r\n> ### Who can help\r\n> @patrickvonplaten\r\n> \r\n> ## Information\r\n> Model I am using (Bert, XLNet ...): MT5ForConditionalGeneration.from_pretrained('google/mt5-small')\r\n> \r\n> The problem arises when using:\r\n> \r\n> * [ ] the official example scripts: (give details below)\r\n> * [x] my own modified scripts: (give details below)\r\n> \r\n> The tasks I am working on is:\r\n> \r\n> * [ ] an official GLUE/SQUaD task: (give the name)\r\n> * [x] my own task or dataset: (give details below)\r\n> KoreanSTS dataset\r\n> https://github.com/kakaobrain/KorNLUDatasets\r\n> \r\n> ## To reproduce\r\n> Steps to reproduce the behavior:\r\n> \r\n> 1. fine-tuning Korean STSb dataset on mT5-small model\r\n> 2. Proceed inference using testset\r\n> 3. Strange results\r\n> \r\n> ```ruby\r\n> import pandas as pd\r\n> %matplotlib inline\r\n> import matplotlib.pyplot as plt\r\n> import random\r\n> import time\r\n> import datetime\r\n> import numpy as np\r\n> import os\r\n> from tqdm.notebook import tqdm\r\n> import logging\r\n> import matplotlib.pyplot as plt\r\n> import seaborn as sns\r\n> \r\n> import torch\r\n> import torch.nn as nn\r\n> import torch.optim as optim\r\n> from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler\r\n> \r\n> from transformers import Adafactor, get_linear_schedule_with_warmup, MT5ForConditionalGeneration, T5Tokenizer \r\n> from scipy.stats import spearmanr, pearsonr\r\n> \r\n> tokenizer = T5Tokenizer.from_pretrained('google/mt5-small')\r\n> model = MT5ForConditionalGeneration.from_pretrained('google/mt5-small', return_dict=True)\r\n> \r\n> GPU_NUM = 4\r\n> device = torch.device(f'cuda:{GPU_NUM}' if torch.cuda.is_available() else 'cpu')\r\n> torch.cuda.set_device(device) # change allocation of current GPU\r\n> print ('Current cuda device ', torch.cuda.current_device()) # check\r\n> \r\n> data_path = \"../dataset\"\r\n> train = os.path.join(data_path,'sts-train.tsv')\r\n> test = os.path.join(data_path,'sts-test.tsv')\r\n> dev = os.path.join(data_path,'sts-dev.tsv')\r\n> \r\n> train_data = pd.read_csv(train, delimiter='\\t', error_bad_lines=False)\r\n> test_data = pd.read_csv(test, delimiter='\\t', error_bad_lines=False)\r\n> dev_data = pd.read_csv(dev, delimiter='\\t', error_bad_lines=False)\r\n> \r\n> train_data.score = round(train_data.score*5)/5\r\n> train_data = train_data.applymap(str)\r\n> train_data['input']=''\r\n> for i in range(len(train_data)):\r\n> strs_to_join = []\r\n> strs_to_join = ['stsb sentence1:', train_data.iloc[i]['sentence1'], 'sentence2:', train_data.iloc[i]['sentence2']]\r\n> train_data['input'].iloc[i] = \" \".join(strs_to_join)\r\n> \r\n> \r\n> dev_data.score = round(dev_data.score*5)/5\r\n> dev_data = 
dev_data.applymap(str)\r\n> dev_data['input']=''\r\n> for i in range(len(dev_data)):\r\n> strs_to_join = []\r\n> strs_to_join = ['stsb sentence1:', dev_data.iloc[i]['sentence1'], 'sentence2:', dev_data.iloc[i]['sentence2']]\r\n> dev_data['input'].iloc[i] = \" \".join(strs_to_join)\r\n> dev_target = dev_data.score\r\n> \r\n> \r\n> test_data.score = round(test_data.score*5)/5\r\n> test_data = test_data.applymap(str)\r\n> test_data['input']=''\r\n> for i in range(len(test_data)):\r\n> strs_to_join = []\r\n> strs_to_join = ['stsb sentence1:', test_data.iloc[i]['sentence1'], 'sentence2:', test_data.iloc[i]['sentence2']]\r\n> test_data['input'].iloc[i] = \" \".join(strs_to_join)\r\n> test_target = test_data.score\r\n> \r\n> train_inputs, train_targets, dev_inputs, dev_targets, test_inputs, test_targets = [],[],[],[],[],[]\r\n> \r\n> for input in train_data.input:\r\n> tokenized_inputs = tokenizer.encode_plus(input, max_length=283, padding='max_length', return_tensors=\"pt\").input_ids\r\n> train_inputs.append(tokenized_inputs)\r\n> \r\n> for target in train_target:\r\n> tokenized_targets = tokenizer.encode_plus(target, max_length=2, padding='max_length', return_tensors=\"pt\").input_ids\r\n> train_targets.append(tokenized_targets)\r\n> \r\n> for input in dev_data.input:\r\n> tokenized_inputs = tokenizer.encode_plus(input, max_length=283, padding='max_length', return_tensors=\"pt\").input_ids\r\n> dev_inputs.append(tokenized_inputs)\r\n> \r\n> for target in dev_target:\r\n> tokenized_targets = tokenizer.encode_plus(target, max_length=2, padding='max_length', return_tensors=\"pt\").input_ids\r\n> dev_targets.append(tokenized_targets)\r\n> \r\n> for input in test_data.input:\r\n> tokenized_inputs = tokenizer.encode_plus(input, max_length=283, padding='max_length', return_tensors=\"pt\").input_ids\r\n> test_inputs.append(tokenized_inputs)\r\n> \r\n> for target in test_target:\r\n> tokenized_targets = tokenizer.encode_plus(target, max_length=2, padding='max_length', return_tensors=\"pt\").input_ids\r\n> test_targets.append(tokenized_targets)\r\n> \r\n> train_input_ids = torch.cat(train_inputs, dim=0)\r\n> train_labels = torch.cat(train_targets, dim=0)\r\n> \r\n> dev_input_ids = torch.cat(dev_inputs, dim=0)\r\n> dev_labels = torch.cat(dev_targets, dim=0)\r\n> \r\n> test_input_ids = torch.cat(test_inputs, dim=0)\r\n> test_labels = torch.cat(test_targets, dim=0)\r\n> \r\n> \r\n> train_dataset = TensorDataset(train_input_ids, train_labels)\r\n> dev_dataset = TensorDataset(dev_input_ids, dev_labels)\r\n> test_dataset = TensorDataset(test_input_ids, test_labels)\r\n> \r\n> \r\n> batch_size = 16\r\n> train_dataloader = DataLoader(\r\n> train_dataset, # The training samples.\r\n> sampler = RandomSampler(train_dataset), # Select batches randomly\r\n> batch_size = batch_size # Trains with this batch size.\r\n> )\r\n> dev_dataloader = DataLoader(\r\n> dev_dataset, # The validation samples.\r\n> sampler = SequentialSampler(dev_dataset), # Pull out batches sequentially.\r\n> batch_size = batch_size # Evaluate with this batch size.\r\n> )\r\n> test_dataloader = DataLoader(\r\n> test_dataset, # The validation samples.\r\n> sampler = SequentialSampler(test_dataset), # Pull out batches sequentially.\r\n> batch_size = batch_size # Evaluate with this batch size.\r\n> )\r\n> \r\n> model.cuda()\r\n> \r\n> params = list(model.named_parameters())\r\n> \r\n> optimizer = Adafactor(model.parameters(), \r\n> lr = 1e-3, # args.learning_rate - default is 5e-5, our notebook had 2e-5\r\n> eps=(1e-30, 1e-3),\r\n> relative_step = 
False\r\n> )\r\n> \r\n> epochs = 30\r\n> total_steps = len(train_dataloader) * epochs\r\n> scheduler = get_linear_schedule_with_warmup(optimizer, \r\n> num_warmup_steps = 0, # Default value in run_glue.py\r\n> num_training_steps = total_steps)\r\n> \r\n> predictions_all=[]\r\n> seed_val = 0\r\n> \r\n> random.seed(seed_val)\r\n> np.random.seed(seed_val)\r\n> torch.manual_seed(seed_val)\r\n> torch.cuda.manual_seed_all(seed_val)\r\n> \r\n> training_stats = []\r\n> total_t0 = time.time()\r\n> \r\n> for epoch_i in tqdm(range(0, epochs)):\r\n> # Training\r\n> print(\"\")\r\n> print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))\r\n> print('Training...')\r\n> \r\n> t0 = time.time()\r\n> total_train_loss = 0\r\n> \r\n> model.train()\r\n> \r\n> for step, batch in tqdm(enumerate(train_dataloader)):\r\n> \r\n> if step % 50 == 0 and not step == 0:\r\n> elapsed = format_time(time.time() - t0)\r\n> \r\n> print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))\r\n> \r\n> b_input_ids = batch[0].to(device)\r\n> b_labels = batch[1].to(device)\r\n> \r\n> model.zero_grad() \r\n> \r\n> output = model(input_ids=b_input_ids, labels=b_labels, return_dict=True)\r\n> loss = output.loss\r\n> logits = output.logits\r\n> \r\n> total_train_loss += loss.item()\r\n> loss.backward()\r\n> \r\n> torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)\r\n> optimizer.step()\r\n> scheduler.step()\r\n> \r\n> avg_train_loss = total_train_loss / len(train_dataloader) \r\n> training_time = format_time(time.time() - t0)\r\n> print(\"\")\r\n> print(\" Average training loss: {0:.2f}\".format(avg_train_loss))\r\n> print(\" Training epcoh took: {:}\".format(training_time))\r\n> \r\n> \r\n> # Validation\r\n> print(\"\")\r\n> print(\"Running Validation...\")\r\n> \r\n> t0 = time.time()\r\n> \r\n> model.eval()\r\n> \r\n> total_eval_loss = 0\r\n> nb_eval_steps = 0\r\n> \r\n> for batch in tqdm(dev_dataloader):\r\n> b_input_ids = batch[0].to(device)\r\n> b_labels = batch[1].to(device)\r\n> \r\n> with torch.no_grad(): \r\n> output = model(input_ids=b_input_ids, labels=b_labels, return_dict=True)\r\n> loss = output.loss\r\n> logits = output.logits\r\n> \r\n> total_eval_loss += loss.item()\r\n> \r\n> logits = logits.detach().cpu().numpy()\r\n> label_ids = b_labels.to('cpu').numpy()\r\n> \r\n> avg_val_loss = total_eval_loss / len(dev_dataloader)\r\n> validation_time = format_time(time.time() - t0)\r\n> print(\" Validation Loss: {0:.2f}\".format(avg_val_loss))\r\n> print(\" Validation took: {:}\".format(validation_time))\r\n> \r\n> training_stats.append(\r\n> {\r\n> 'epoch': epoch_i + 1,\r\n> 'Training Loss': avg_train_loss,\r\n> 'Valid. 
Loss': avg_val_loss,\r\n> 'Training Time': training_time,\r\n> 'Validation Time': validation_time\r\n> }\r\n> )\r\n> \r\n> # test\r\n> print('Predicting labels for {:,} test sentences...'.format(len(test_input_ids)))\r\n> model.eval()\r\n> predictions = []\r\n> \r\n> for batch in tqdm(test_dataloader):\r\n> b_input_ids = batch[0].to(device) \r\n> \r\n> with torch.no_grad():\r\n> outputs = model.generate(b_input_ids)\r\n> predictions.append(outputs)\r\n> print('DONE.')\r\n> \r\n> predictions_all.append(predictions)\r\n> \r\n> print(\"\")\r\n> print(\"Training complete!\")\r\n> \r\n> print(\"Total training took {:} (h:mm:ss)\".format(format_time(time.time()-total_t0)))\r\n> \r\n> \r\n> for i in range(10):\r\n> output = model.generate(test_input_ids[i].cuda().reshape(1,-1))\r\n> print(tokenizer.decode(output[0]))\r\n> ```\r\n> \r\n> > <extra_id_0>\r\n> > <extra_id_0>.\r\n> > <extra_id_0>.\r\n> > <extra_id_0>\r\n> > <extra_id_0>합니다.\r\n> > <extra_id_0>\r\n> > <extra_id_0>.\r\n> > <extra_id_0>.\r\n> > <extra_id_0>.\r\n> > <extra_id_0>.\r\n> \r\n> ## Expected behavior\r\n> Thank you for sharing so you can use T5 and mT5 using pytorch.\r\n> \r\n> 1. I fine-tuned the Korean STSB dataset on mt5-small. But the result didn't come out the way I wanted it to come out in a strange shape.\r\n> There are about 5700 training datasets.\r\n> I wonder if there was a mistake in the learning process, or because the data set was insufficient, or because it was less learned.\r\n> 2. Next, when inferencing using mT5(T5), what is the difference between proceeding with model.generate() and doing with model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)?\r\n\r\nI have encountered the same question. Do you have any idea? thank you" ]
1,606
1,633
1,607
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.0.0-rc-1 - Platform: Linux - Python version: 3.7.9 - PyTorch version (GPU?): 1.4.0 - Tensorflow version (GPU?): NA - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): MT5ForConditionalGeneration.from_pretrained('google/mt5-small') The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) KoreanSTS dataset https://github.com/kakaobrain/KorNLUDatasets ## To reproduce Steps to reproduce the behavior: 1. fine-tuning Korean STSb dataset on mT5-small model 2. Proceed inference using testset 3. Strange results <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ```ruby import pandas as pd %matplotlib inline import matplotlib.pyplot as plt import random import time import datetime import numpy as np import os from tqdm.notebook import tqdm import logging import matplotlib.pyplot as plt import seaborn as sns import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler from transformers import Adafactor, get_linear_schedule_with_warmup, MT5ForConditionalGeneration, T5Tokenizer from scipy.stats import spearmanr, pearsonr tokenizer = T5Tokenizer.from_pretrained('google/mt5-small') model = MT5ForConditionalGeneration.from_pretrained('google/mt5-small', return_dict=True) GPU_NUM = 4 device = torch.device(f'cuda:{GPU_NUM}' if torch.cuda.is_available() else 'cpu') torch.cuda.set_device(device) # change allocation of current GPU print ('Current cuda device ', torch.cuda.current_device()) # check data_path = "../dataset" train = os.path.join(data_path,'sts-train.tsv') test = os.path.join(data_path,'sts-test.tsv') dev = os.path.join(data_path,'sts-dev.tsv') train_data = pd.read_csv(train, delimiter='\t', error_bad_lines=False) test_data = pd.read_csv(test, delimiter='\t', error_bad_lines=False) dev_data = pd.read_csv(dev, delimiter='\t', error_bad_lines=False) train_data.score = round(train_data.score*5)/5 train_data = train_data.applymap(str) train_data['input']='' for i in range(len(train_data)): strs_to_join = [] strs_to_join = ['stsb sentence1:', train_data.iloc[i]['sentence1'], 'sentence2:', train_data.iloc[i]['sentence2']] train_data['input'].iloc[i] = " ".join(strs_to_join) dev_data.score = round(dev_data.score*5)/5 dev_data = dev_data.applymap(str) dev_data['input']='' for i in range(len(dev_data)): strs_to_join = [] strs_to_join = ['stsb sentence1:', dev_data.iloc[i]['sentence1'], 'sentence2:', 
dev_data.iloc[i]['sentence2']] dev_data['input'].iloc[i] = " ".join(strs_to_join) dev_target = dev_data.score test_data.score = round(test_data.score*5)/5 test_data = test_data.applymap(str) test_data['input']='' for i in range(len(test_data)): strs_to_join = [] strs_to_join = ['stsb sentence1:', test_data.iloc[i]['sentence1'], 'sentence2:', test_data.iloc[i]['sentence2']] test_data['input'].iloc[i] = " ".join(strs_to_join) test_target = test_data.score train_inputs, train_targets, dev_inputs, dev_targets, test_inputs, test_targets = [],[],[],[],[],[] for input in train_data.input: tokenized_inputs = tokenizer.encode_plus(input, max_length=283, padding='max_length', return_tensors="pt").input_ids train_inputs.append(tokenized_inputs) for target in train_target: tokenized_targets = tokenizer.encode_plus(target, max_length=2, padding='max_length', return_tensors="pt").input_ids train_targets.append(tokenized_targets) for input in dev_data.input: tokenized_inputs = tokenizer.encode_plus(input, max_length=283, padding='max_length', return_tensors="pt").input_ids dev_inputs.append(tokenized_inputs) for target in dev_target: tokenized_targets = tokenizer.encode_plus(target, max_length=2, padding='max_length', return_tensors="pt").input_ids dev_targets.append(tokenized_targets) for input in test_data.input: tokenized_inputs = tokenizer.encode_plus(input, max_length=283, padding='max_length', return_tensors="pt").input_ids test_inputs.append(tokenized_inputs) for target in test_target: tokenized_targets = tokenizer.encode_plus(target, max_length=2, padding='max_length', return_tensors="pt").input_ids test_targets.append(tokenized_targets) train_input_ids = torch.cat(train_inputs, dim=0) train_labels = torch.cat(train_targets, dim=0) dev_input_ids = torch.cat(dev_inputs, dim=0) dev_labels = torch.cat(dev_targets, dim=0) test_input_ids = torch.cat(test_inputs, dim=0) test_labels = torch.cat(test_targets, dim=0) train_dataset = TensorDataset(train_input_ids, train_labels) dev_dataset = TensorDataset(dev_input_ids, dev_labels) test_dataset = TensorDataset(test_input_ids, test_labels) batch_size = 16 train_dataloader = DataLoader( train_dataset, # The training samples. sampler = RandomSampler(train_dataset), # Select batches randomly batch_size = batch_size # Trains with this batch size. ) dev_dataloader = DataLoader( dev_dataset, # The validation samples. sampler = SequentialSampler(dev_dataset), # Pull out batches sequentially. batch_size = batch_size # Evaluate with this batch size. ) test_dataloader = DataLoader( test_dataset, # The validation samples. sampler = SequentialSampler(test_dataset), # Pull out batches sequentially. batch_size = batch_size # Evaluate with this batch size. 
) model.cuda() params = list(model.named_parameters()) optimizer = Adafactor(model.parameters(), lr = 1e-3, # args.learning_rate - default is 5e-5, our notebook had 2e-5 eps=(1e-30, 1e-3), relative_step = False ) epochs = 30 total_steps = len(train_dataloader) * epochs scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, # Default value in run_glue.py num_training_steps = total_steps) predictions_all=[] seed_val = 0 random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) training_stats = [] total_t0 = time.time() for epoch_i in tqdm(range(0, epochs)): # Training print("") print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs)) print('Training...') t0 = time.time() total_train_loss = 0 model.train() for step, batch in tqdm(enumerate(train_dataloader)): if step % 50 == 0 and not step == 0: elapsed = format_time(time.time() - t0) print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed)) b_input_ids = batch[0].to(device) b_labels = batch[1].to(device) model.zero_grad() output = model(input_ids=b_input_ids, labels=b_labels, return_dict=True) loss = output.loss logits = output.logits total_train_loss += loss.item() loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() scheduler.step() avg_train_loss = total_train_loss / len(train_dataloader) training_time = format_time(time.time() - t0) print("") print(" Average training loss: {0:.2f}".format(avg_train_loss)) print(" Training epcoh took: {:}".format(training_time)) # Validation print("") print("Running Validation...") t0 = time.time() model.eval() total_eval_loss = 0 nb_eval_steps = 0 for batch in tqdm(dev_dataloader): b_input_ids = batch[0].to(device) b_labels = batch[1].to(device) with torch.no_grad(): output = model(input_ids=b_input_ids, labels=b_labels, return_dict=True) loss = output.loss logits = output.logits total_eval_loss += loss.item() logits = logits.detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() avg_val_loss = total_eval_loss / len(dev_dataloader) validation_time = format_time(time.time() - t0) print(" Validation Loss: {0:.2f}".format(avg_val_loss)) print(" Validation took: {:}".format(validation_time)) training_stats.append( { 'epoch': epoch_i + 1, 'Training Loss': avg_train_loss, 'Valid. Loss': avg_val_loss, 'Training Time': training_time, 'Validation Time': validation_time } ) # test print('Predicting labels for {:,} test sentences...'.format(len(test_input_ids))) model.eval() predictions = [] for batch in tqdm(test_dataloader): b_input_ids = batch[0].to(device) with torch.no_grad(): outputs = model.generate(b_input_ids) predictions.append(outputs) print('DONE.') predictions_all.append(predictions) print("") print("Training complete!") print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0))) for i in range(10): output = model.generate(test_input_ids[i].cuda().reshape(1,-1)) print(tokenizer.decode(output[0])) ``` ><pad> <extra_id_0></s> <pad> <extra_id_0>.</s> <pad> <extra_id_0>.</s> <pad> <extra_id_0></s> <pad> <extra_id_0>합니다.</s> <pad> <extra_id_0></s> <pad> <extra_id_0>.</s> <pad> <extra_id_0>.</s> <pad> <extra_id_0>.</s> <pad> <extra_id_0>.</s> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Thank you for sharing so you can use T5 and mT5 using pytorch. 1. I fine-tuned the Korean STSB dataset on mt5-small. 
But the result didn't come out the way I wanted it to come out in a strange shape. There are about 5700 training datasets. I wonder if there was a mistake in the learning process, or because the data set was insufficient, or because it was less learned. 2. Next, when inferencing using mT5(T5), what is the difference between proceeding with model.generate() and doing with model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)?
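A hedged sketch addressing point 2 above; it reuses `google/mt5-small` from the issue, and the input/target strings are placeholders, not the reporter's data. A plain forward pass with `labels` (or `decoder_input_ids`) is a single teacher-forced pass that returns logits and a loss, with every decoder position conditioned on the provided previous tokens, whereas `generate()` decodes autoregressively, feeding its own predictions back in until EOS or `max_length`.

```python
import torch
from transformers import MT5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
model.eval()

enc = tokenizer("stsb sentence1: placeholder sentence2: placeholder", return_tensors="pt")

# (1) Teacher-forced forward pass: decoder inputs are derived from the labels,
#     so each position sees the gold previous token; you get a loss + logits.
labels = tokenizer("3.2", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(input_ids=enc.input_ids, labels=labels, return_dict=True)
print(out.loss.item(), out.logits.shape)

# (2) Free-running generation: the model conditions on its *own* previous
#     predictions at every step, which is what you use at inference time.
gen = model.generate(enc.input_ids, max_length=4)
print(tokenizer.decode(gen[0], skip_special_tokens=True))
```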
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8844/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8843
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8843/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8843/comments
https://api.github.com/repos/huggingface/transformers/issues/8843/events
https://github.com/huggingface/transformers/issues/8843
753,098,335
MDU6SXNzdWU3NTMwOTgzMzU=
8,843
"BertForMaskedLM - pretrained model" cannot resize vocab output size
{ "login": "HenryPaik1", "id": 42961175, "node_id": "MDQ6VXNlcjQyOTYxMTc1", "avatar_url": "https://avatars.githubusercontent.com/u/42961175?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HenryPaik1", "html_url": "https://github.com/HenryPaik1", "followers_url": "https://api.github.com/users/HenryPaik1/followers", "following_url": "https://api.github.com/users/HenryPaik1/following{/other_user}", "gists_url": "https://api.github.com/users/HenryPaik1/gists{/gist_id}", "starred_url": "https://api.github.com/users/HenryPaik1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HenryPaik1/subscriptions", "organizations_url": "https://api.github.com/users/HenryPaik1/orgs", "repos_url": "https://api.github.com/users/HenryPaik1/repos", "events_url": "https://api.github.com/users/HenryPaik1/events{/privacy}", "received_events_url": "https://api.github.com/users/HenryPaik1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I solved it by following code:\r\n```\r\nwith torch.no_grad():\r\n replace_linear = torch.nn.Linear(in_features=768, out_features=len(tokenizer))\r\n replace_linear.weight[:30522,:].copy_(model.cls.predictions.decoder.weight)\r\n model.cls.predictions.decoder = replace_linear\r\n model.cls.predictions.decoder = model.cls.predictions.decoder.requires_grad_(True)\r\n```" ]
1,606
1,606
1,606
NONE
null
### Who can help albert, bert, GPT2, XLM: @LysandreJik ## Information Model I am using (Bert) The problem arises when using: * [ ] the official example scripts: (give details below) * [0] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [0] my own task or dataset: (give details below) ## To reproduce I resized the embedding dimension, but the output (vocabulary) dimension does not change. Please kindly refer to the code below: ``` from transformers import BertTokenizer, BertForMaskedLM from transformers import LineByLineTextDataset model = BertForMaskedLM.from_pretrained('bert-base-uncased') ###### tokenizer tokenizer_path = './data/transformer_tokenizer_add_entitymasking_token.pt/' tokenizer = BertTokenizer.from_pretrained(tokenizer_path) model.bert.resize_token_embeddings(len(tokenizer)) model.cls.predictions.decoder.out_features = len(tokenizer) out[0].shape, label.shape >>> (torch.Size([2, 20, 30522]), torch.Size([2, 20])) # 30522 should be len(tokenizer), i.e. 30544 ```
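A hedged sketch of an alternative to manually patching `cls.predictions.decoder`: resizing on the full `BertForMaskedLM` (rather than on `model.bert`) should, on recent transformers versions, grow the input embeddings and the tied MLM output projection together. The added token and input text below are placeholders standing in for the reporter's entity-masking tokenizer.

```python
from transformers import BertTokenizer, BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Placeholder for the entity-masking tokens added in the reporter's tokenizer.
tokenizer.add_tokens(["[ENT_MASK]"])

# Resize on the whole model so the tied output layer is resized as well.
model.resize_token_embeddings(len(tokenizer))

inputs = tokenizer("the capital of [ENT_MASK] is paris", return_tensors="pt")
outputs = model(**inputs, return_dict=True)
print(outputs.logits.shape)  # last dimension should equal len(tokenizer)
```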
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8843/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8842
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8842/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8842/comments
https://api.github.com/repos/huggingface/transformers/issues/8842/events
https://github.com/huggingface/transformers/issues/8842
753,088,379
MDU6SXNzdWU3NTMwODgzNzk=
8,842
T5 generations for pretraining objective degenerate
{ "login": "alexisjihyeross", "id": 22248925, "node_id": "MDQ6VXNlcjIyMjQ4OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22248925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexisjihyeross", "html_url": "https://github.com/alexisjihyeross", "followers_url": "https://api.github.com/users/alexisjihyeross/followers", "following_url": "https://api.github.com/users/alexisjihyeross/following{/other_user}", "gists_url": "https://api.github.com/users/alexisjihyeross/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexisjihyeross/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexisjihyeross/subscriptions", "organizations_url": "https://api.github.com/users/alexisjihyeross/orgs", "repos_url": "https://api.github.com/users/alexisjihyeross/repos", "events_url": "https://api.github.com/users/alexisjihyeross/events{/privacy}", "received_events_url": "https://api.github.com/users/alexisjihyeross/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I guess it's quite normal that the quality degenerates the longer the sequence gets, especially since the output comes from `generate()`...I don't really think that this poses a problem here", "The model for some reason does not want to generate anything after <extra_id_27>, and the same behavior (i.e. problems after 27) occurs for t5-small.\r\n\r\nThe reason you're seeing gibberish after 27 is that the model has already generated an EOS token (id == 1). At this point the model has said \"I'm done generating. I think the sequences has ended\". However, since you told it to use <extra_id_40> as EOS, it continues to try to produce tokens even after producing token id == 1. But the model doesn't know what to do after creating an EOS so you get gibberish.\r\n\r\nIf you don't tell it to use a different EOS token, then it will simply stop generating after hitting <extra_id_27>.\r\n\r\nI tried specifying that token id == 1 is a bad word so that the model won't generate it, but that also doesn't fix the problem.\r\n\r\nStill, I do think it is quite odd that the model cannot generate more than 27 masked tokens. \r\n\r\nIs it possible that this sort of task was only ever done as pretraining? So then the model would always have had teacher forcing, which means that it would never have to \"predict\" so many tokens into the future for this task. \r\n\r\nIf your goal is to fill in many blanks, then you could adapt in one of the following ways:\r\n- masking one token at a time in the full sentence\r\n- starting with the long input sentence and then appending the correct outputs up to the current extra_id. So e.g. for extra_id_2 you would have (+/- the trailing token) :\r\nlong_input_sentence <\\s> <eid 0> of <eid 1> based <eid 2> \r\n\r\nand then the model will generate the next tokens. at each point you would be using the next <eid> as the stopping token.\r\n\r\n-- edit --\r\nI tried the second method and it does not seem to work. It might be because this is an encoder-decoder model, so we need to be seeding the decoder with each additional generation, rather than extending the input sequence. This is possible in the model's forward method but I don't know how to do it with generate.", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
## The issue I am using a pretrained T5 model to generate missing spans as in the pretraining objective. However, I'm finding that these generations deteriorate for longer sequences (usually after around the 25th span or so). Below is an example of this deterioration on a sequence (from the IMDB dataset) where 15% of the tokens have been randomly masked with sentinel tokens. Given that the T5 model was pretrained using sequences of up to 512 tokens with 15% of tokens masked, shouldn't it be possible to obtain good generations on sequences like the one below? Why are generations like this one deteriorating? Thank you! ## Environment info - `transformers` version: 3.5.0 - Platform: Linux-4.15.0-45-generic-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: True ### Who can help @patrickvonplaten ## To reproduce Steps to reproduce the behavior: ```python import torch from transformers import T5Tokenizer, T5ForConditionalGeneration device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model_name = "t5-base" t5_tokenizer = T5Tokenizer.from_pretrained(model_name) t5_model = T5ForConditionalGeneration.from_pretrained(model_name) t5_model = t5_model.to(device) original_sentence = "ROCK STAR is a well-told Hollywood-style rendition of the tale based on fact actually on how Ripper became Rob Halford's replacement for Judas Priest. Mark Wahlberg poured on his likable boy-ish charm and performed with believable admirably, something he has been known to do since the release of BOOGIE NIGHTS. Stephen Herek, no stranger to musically-themed movies, takes the audience through the wonders of the breakneck lifestyle of an extinct species, the Hair-Metal Rock God. Wahlberg's 'Izzy' acts as the film's host plays the everyman who gets to see his wish come true. His likable character quickly wins over the heart of the viewer, who wants to see him succeed and gets the chance to give him the Metal 'goat horn' hand-sign several times over. The only real complaint with the story is that the supporting cast, namely the other members of the band, were not fleshed out, or even introduced, properly. More interaction with these life-long Rock musicians would have amplified and solidified Izzy's new surroundings. Naturally, ROCK STAR is filled with great music. Rabin's score, the Steel Dragon's original work and plenty of 80's-style Metal hits makes this soundtrack a must-have! Let's all hope that films like ROCK STAR not only give a credibility to a style of music that helped define a generation but also spark a very-needed revival.</s>" sentence = "ROCK STAR is a well-told Hollywood-style rendition<extra_id_0> the tale <extra_id_1> on<extra_id_2> on<extra_id_3> Ri<extra_id_4> became Rob Hal<extra_id_5>'s replacement for Ju<extra_id_6> Priest.<extra_id_7> Wahlberg poured on his likable boy-ish charm and performed with believable admirably, something<extra_id_8>he has been known to do<extra_id_9> the release of BOOGIE NIGHTS. Stephen Herek<extra_id_10> no stranger to musically-themed<extra_id_11>, takes the<extra_id_12> through the<extra_id_13>s<extra_id_14> break<extra_id_15> of<extra_id_16> extinct<extra_id_17>, the<extra_id_18>-Metal Rock<extra_id_19> Wahlberg's 'Izzy' acts as the film's host plays the everyman who gets to see his wish come true. 
His<extra_id_20>likable character quickly<extra_id_21> the heart of the viewer, who<extra_id_22> to see him succeed and gets the chance to give him the Metal 'goat horn' hand-sign several times over<extra_id_23> The only real complaint with the<extra_id_24> is that the supporting<extra_id_25>,<extra_id_26>namely the other members of<extra_id_27> band, were not fleshed out, or even introduced, properly<extra_id_28> More interaction with these life-long<extra_id_29> musicians would have amplified and solidified Izzy's new surroundings<extra_id_30> Naturally,<extra_id_31>CK STAR is filled<extra_id_32> great music. Rabin's score, the Steel Dragon<extra_id_33>s original work<extra_id_34> of 80's-style Metal<extra_id_35> makes this soundtrack<extra_id_36>a must-have<extra_id_37> all hope that films like ROCK STAR not only give a credibility<extra_id_38> a style of music that helped define a generation but also spark a very-needed revival<extra_id_39></s>" encoded = t5_tokenizer.encode(sentence) print("original sentence: ", original_sentence) print("\nmasked sentence: ", sentence) print("\nnum tokens masked sentence: ", len(encoded)) encoded_tensor = torch.LongTensor(encoded).unsqueeze(0).to(device) eos_token_id = t5_tokenizer.encode("<extra_id_40>")[0] batch = t5_model.generate(encoded_tensor, early_stopping = True, max_length = 300, eos_token_id = eos_token_id, no_repeat_ngram_size = 2, num_beams = 1, num_return_sequences = 1) for b in batch: print("\noutput: ") print(t5_tokenizer.decode(b, skip_special_tokens = False)) ``` output: > original sentence: ROCK STAR is a well-told Hollywood-style rendition of the tale based on fact actually on how Ripper became Rob Halford's replacement for Judas Priest. Mark Wahlberg poured on his likable boy-ish charm and performed with believable admirably, something he has been known to do since the release of BOOGIE NIGHTS. Stephen Herek, no stranger to musically-themed movies, takes the audience through the wonders of the breakneck lifestyle of an extinct species, the Hair-Metal Rock God. Wahlberg's 'Izzy' acts as the film's host plays the everyman who gets to see his wish come true. His likable character quickly wins over the heart of the viewer, who wants to see him succeed and gets the chance to give him the Metal 'goat horn' hand-sign several times over. The only real complaint with the story is that the supporting cast, namely the other members of the band, were not fleshed out, or even introduced, properly. More interaction with these life-long Rock musicians would have amplified and solidified Izzy's new surroundings. Naturally, ROCK STAR is filled with great music. Rabin's score, the Steel Dragon's original work and plenty of 80's-style Metal hits makes this soundtrack a must-have! Let's all hope that films like ROCK STAR not only give a credibility to a style of music that helped define a generation but also spark a very-needed revival.</s> > > masked sentence: ROCK STAR is a well-told Hollywood-style rendition<extra_id_0> the tale <extra_id_1> on<extra_id_2> on<extra_id_3> Ri<extra_id_4> became Rob Hal<extra_id_5>'s replacement for Ju<extra_id_6> Priest.<extra_id_7> Wahlberg poured on his likable boy-ish charm and performed with believable admirably, something<extra_id_8>he has been known to do<extra_id_9> the release of BOOGIE NIGHTS. 
Stephen Herek<extra_id_10> no stranger to musically-themed<extra_id_11>, takes the<extra_id_12> through the<extra_id_13>s<extra_id_14> break<extra_id_15> of<extra_id_16> extinct<extra_id_17>, the<extra_id_18>-Metal Rock<extra_id_19> Wahlberg's 'Izzy' acts as the film's host plays the everyman who gets to see his wish come true. His<extra_id_20>likable character quickly<extra_id_21> the heart of the viewer, who<extra_id_22> to see him succeed and gets the chance to give him the Metal 'goat horn' hand-sign several times over<extra_id_23> The only real complaint with the<extra_id_24> is that the supporting<extra_id_25>,<extra_id_26>namely the other members of<extra_id_27> band, were not fleshed out, or even introduced, properly<extra_id_28> More interaction with these life-long<extra_id_29> musicians would have amplified and solidified Izzy's new surroundings<extra_id_30> Naturally,<extra_id_31>CK STAR is filled<extra_id_32> great music. Rabin's score, the Steel Dragon<extra_id_33>s original work<extra_id_34> of 80's-style Metal<extra_id_35> makes this soundtrack<extra_id_36>a must-have<extra_id_37> all hope that films like ROCK STAR not only give a credibility<extra_id_38> a style of music that helped define a generation but also spark a very-needed revival<extra_id_39></s> > > num tokens masked sentence: 337 > > output: > <extra_id_0> of<extra_id_1> of the man who<extra_id_2> the day of his death<extra_id_3> the<extra_id_4>m<extra_id_5>e<extra_id_6>das<extra_id_7> Steven<extra_id_8> that<extra_id_9> since<extra_id_10>o,<extra_id_11> films<extra_id_12> audience<extra_id_13> 'rock<extra_id_14>aga' of a<extra_id_15>-up<extra_id_16> the<extra_id_17> CK STAR band<extra_id_18> newest<extra_id_19> band. Steven<extra_id_20> incredibly<extra_id_21> captures<extra_id_22> is eager<extra_id_23>.<extra_id_24> film<extra_id_25> cast<extra_id_26> and<extra_id_27> the -Metal Rock, the band's sailor, and the members of Izzy' emcees, who were <extra_id_1><extra_id_20><extra_id_19>.-n " de an (s, in<extra_id_10> thetgrae and pro also le of'lr to ex si not on<extra_id_5><extra_id_3> I<extra_id_7>" ensemble­ ⁇ » be last for fiia =/ den?<extra_id_26> pour as --) 2 $: + 1 S un former dis spa<extra_id_17> tub root will at both second<extra_id_25> is no bout muscle hard des<extra_id_21>re<extra_id_23> baseball facialw mi& * [...;
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8842/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8841
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8841/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8841/comments
https://api.github.com/repos/huggingface/transformers/issues/8841/events
https://github.com/huggingface/transformers/pull/8841
753,067,396
MDExOlB1bGxSZXF1ZXN0NTI5MjE4MDA4
8,841
Don't warn that models aren't available if Flax is available.
{ "login": "skye", "id": 88808, "node_id": "MDQ6VXNlcjg4ODA4", "avatar_url": "https://avatars.githubusercontent.com/u/88808?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skye", "html_url": "https://github.com/skye", "followers_url": "https://api.github.com/users/skye/followers", "following_url": "https://api.github.com/users/skye/following{/other_user}", "gists_url": "https://api.github.com/users/skye/gists{/gist_id}", "starred_url": "https://api.github.com/users/skye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skye/subscriptions", "organizations_url": "https://api.github.com/users/skye/orgs", "repos_url": "https://api.github.com/users/skye/repos", "events_url": "https://api.github.com/users/skye/events{/privacy}", "received_events_url": "https://api.github.com/users/skye/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,607
1,607
CONTRIBUTOR
null
# What does this PR do? Disables the "Neither PyTorch nor TensorFlow >= 2.0 have been found" warning if Flax has been found. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8841/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8841", "html_url": "https://github.com/huggingface/transformers/pull/8841", "diff_url": "https://github.com/huggingface/transformers/pull/8841.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8841.patch", "merged_at": 1607009593000 }
https://api.github.com/repos/huggingface/transformers/issues/8840
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8840/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8840/comments
https://api.github.com/repos/huggingface/transformers/issues/8840/events
https://github.com/huggingface/transformers/pull/8840
753,004,889
MDExOlB1bGxSZXF1ZXN0NTI5MTcwMzg5
8,840
Diverse number of return sequences for greedy search and sampling generation
{ "login": "LSinev", "id": 12072891, "node_id": "MDQ6VXNlcjEyMDcyODkx", "avatar_url": "https://avatars.githubusercontent.com/u/12072891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LSinev", "html_url": "https://github.com/LSinev", "followers_url": "https://api.github.com/users/LSinev/followers", "following_url": "https://api.github.com/users/LSinev/following{/other_user}", "gists_url": "https://api.github.com/users/LSinev/gists{/gist_id}", "starred_url": "https://api.github.com/users/LSinev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LSinev/subscriptions", "organizations_url": "https://api.github.com/users/LSinev/orgs", "repos_url": "https://api.github.com/users/LSinev/repos", "events_url": "https://api.github.com/users/LSinev/events{/privacy}", "received_events_url": "https://api.github.com/users/LSinev/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @LSinev, \r\n\r\nthanks for your PR! Since the `generate()` refactor we are really trying to not add any use-case specific code anymore to the individual `generate()` methods. This quickly led to an unmaintainable code previously (especially if start adding lots of those `if` statements) again, so we can't merge the PR as it is in this state. It would be ideal if we could just add a new `LogitsPreprocessor` class or a `LogitsWarperClass`. \r\n\r\nIf this is not sufficient, we have to think a bit more about how to add this PR. \r\nOne thing, I don't really understand is how greedy search with `diverse_sequences=True` is different from Beam Search with `num_return_sequences > 1` -> it seems to be the same thing for me...Also could you add some links/pointers (paper, blog, other codebase) that makes use of this method? ", "> It would be ideal if we could just add a new LogitsPreprocessor class or a LogitsWarperClass.\r\n\r\nOk. I will check if it is possible (but this can move `if` statements inside, as I have to check processing of first token somehow).\r\n\r\n> how greedy search with `diverse_sequences=True` is different from Beam Search with `num_return_sequences > 1` -> it seems to be the same thing for me...\r\n\r\nNever thought about this. Will check.\r\n\r\n> Also could you add some links/pointers (paper, blog, other codebase) that makes use of this method?\r\n\r\nNothing openly available as far as i know. Because of `transformers` popularity, if such possibility not implemented, few developers will try these ideas. Main usecase is additional ranking of generated sequences. As for now, nothing stops to have exactly same sequences as output. It can also be used with probabilities of final sequences from second head of GPT2DoubleHeadsModel, for example (https://github.com/huggingface/transformers/issues/5164). ", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,619
1,614
CONTRIBUTOR
null
# What does this PR do? A new option proposed, `diverse_sequences`, for cases, when one wants really different sequences to be generated (conversational bot, for example). For greedy search, it starts generating new sequences from top `num_return_sequences` tokens (as first tokens in sequences). For sample generation mode, `num_return_sequences` first tokens are taken from a multinomial distribution. Default `diverse_sequences=False` leaves generation in a way it was before this PR. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? GPT2: @patrickvonplaten Text Generation: @TevenLeScao T5: @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8840/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8840", "html_url": "https://github.com/huggingface/transformers/pull/8840", "diff_url": "https://github.com/huggingface/transformers/pull/8840.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8840.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8839
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8839/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8839/comments
https://api.github.com/repos/huggingface/transformers/issues/8839/events
https://github.com/huggingface/transformers/pull/8839
752,946,912
MDExOlB1bGxSZXF1ZXN0NTI5MTMwNjgx
8,839
[Needs Discussion] [WIP] [Docs] Clean tokenizer doc api and add fast tokenizers
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, the person that added the fast sentencepiece tokenizers did not document them in the same PR...\r\nAs for the `__call__` method (and the `encode` one), the idea was to refer to the doc of the superclass, but it makes sense to have them directly accessible for each tokenizer. If we add it, we should also add `encode` I think.", "> Yes, the person that added the fast sentencepiece tokenizers did not document them in the same PR...\r\n> As for the `__call__` method (and the `encode` one), the idea was to refer to the doc of the superclass, but it makes sense to have them directly accessible for each tokenizer. If we add it, we should also add `encode` I think.\r\n\r\nOk great, I'll add `__call__` and `encode` to all tokenizers. Do you think I should add `encode_plus` and `batch_encode_plus` then as well? Or would that clutter the docs too much in your opinion?", "I think `encode_plus` and `batch_encode_plus` are more or less deprecated and should not be in the docs." ]
1,606
1,607
1,607
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Currently, most fast tokenizers do not have their docstrings in the doc. In this PR I want to add all the fast tokenizer docs. Also, I think that the main slow tokenizer `__call__` method should also be added to the docs. @sgugger - before proceeding with all other tokenizer docs, I'd like to hear your opinion on which functions should be included in the docs and which should not. I'd like to add the `__call__` function to all slow tokenizer docs as well as for all fast tokenizer docs. @sgugger what other functions do you think I should add to the fast tokenizer doc? ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8839/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8839", "html_url": "https://github.com/huggingface/transformers/pull/8839", "diff_url": "https://github.com/huggingface/transformers/pull/8839.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8839.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8838
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8838/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8838/comments
https://api.github.com/repos/huggingface/transformers/issues/8838/events
https://github.com/huggingface/transformers/issues/8838
752,938,629
MDU6SXNzdWU3NTI5Mzg2Mjk=
8,838
RuntimeError: found torch.cuda.HalfTensor expected torch.cuda.FloatTensor while fine-tuning RAGSequence-base with custom data
{ "login": "ritvik1512", "id": 3297869, "node_id": "MDQ6VXNlcjMyOTc4Njk=", "avatar_url": "https://avatars.githubusercontent.com/u/3297869?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ritvik1512", "html_url": "https://github.com/ritvik1512", "followers_url": "https://api.github.com/users/ritvik1512/followers", "following_url": "https://api.github.com/users/ritvik1512/following{/other_user}", "gists_url": "https://api.github.com/users/ritvik1512/gists{/gist_id}", "starred_url": "https://api.github.com/users/ritvik1512/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ritvik1512/subscriptions", "organizations_url": "https://api.github.com/users/ritvik1512/orgs", "repos_url": "https://api.github.com/users/ritvik1512/repos", "events_url": "https://api.github.com/users/ritvik1512/events{/privacy}", "received_events_url": "https://api.github.com/users/ritvik1512/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I cannot really reproduce when just running the fine-tuning script @ritvik1512 could you provide us maybe with a complete code example to reproduce the error? A short colab would also be very helpful! \r\n\r\nAlso pinging @lhoestq here since he knows more about RAG fine-tuning", "I haven't experienced issues using --fp16 for apex. A code example to reproduce the error would be welcome indeed", "Thanks for the quick response!\r\nI used the following parameters while running `finetune.py`\r\n\r\n```python\r\npython finetune_rag.py \\\r\n --data_dir $DATA_DIR \\\r\n --output_dir $OUTPUT_DIR \\\r\n --model_name_or_path $MODEL_NAME_OR_PATH \\ #(facebook/rag-sequence-base)\r\n --model_type rag_sequence \\\r\n --gpus 4 \\\r\n --fp16 \\\r\n --index_name custom \\\r\n --passages_path $PASSAGE_PATH \\ #(an extremely short knowledge source with 2 entries for test)\r\n --index_path $INDEX_PATH \\ #(corresponding index)\r\n --do_predict \\\r\n --do_train \\\r\n --n_val -1 \\\r\n --val_check_interval 0.25 \\\r\n --train_batch_size 4 \\ #(reduced from 8 to avoid OOM)\r\n --eval_batch_size 1 \\\r\n --max_source_length 128 \\\r\n --max_target_length 25 \\\r\n --val_max_target_length 25 \\\r\n --test_max_target_length 25 \\\r\n --label_smoothing 0.1 \\\r\n --dropout 0.1 \\\r\n --attention_dropout 0.1 \\\r\n --weight_decay 0.001 \\\r\n --adam_epsilon 1e-08 \\\r\n --max_grad_norm 0.1 \\\r\n --lr_scheduler polynomial \\\r\n --learning_rate 3e-05 \\\r\n --num_train_epochs 1 \\ #(set at 1 for testing)\r\n --warmup_steps 500 \\\r\n --gradient_accumulation_steps 1\r\n```\r\nI will try putting together a colab example very soon, in the meantime let me know if the above snippet helps, thanks!", "@patrickvonplaten @lhoestq apologies for a delayed response. I did try putting together a Google Colab environment [(here)](https://colab.research.google.com/drive/1LlWS6tWWp1Oo4ygUE_J53bBTzUxWlF6J?usp=sharing) replicating my local and with **no** --fp16 I could not get the training to end, at around 20% of the way `tcmalloc: large alloc` warnings show up and the runtime resets with all RAM used.\r\n\r\nMeanwhile using --fp16, the training it fails with an `IndexError: list index out of range` before the training even starts. \r\n\r\nBy the way, I follow the exact code as above (with --fp16) on my local machine and while there the training does do through all the epochs it ends up giving out the error mentioned at the top.\r\n\r\nTherefore I am kind of at a loss as to where the issue is and how to proceed. \r\nPlease let me know if you see any possible ways out, thanks!", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
## Environment info - `transformers` version: 4.0.0-rc-1 (master) - Platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-centos-7.5.1804-Core - Python version: 3.6.5 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help RAG: @patrickvonplaten, @lhoestq ## Information I am fine-tuning RAGSequence-base according to the fine-tuning examples, along with a custom knowledge dataset (<10 lines) The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Generated the faiss index and the custom embeddings. 2. Executed `finetune_rag.py` with the same parameters as the given shell script. (except reducing epoch to 1 for test) 3. After going through the specified epoch, it abruptly ends with the following runtime error. ```python Traceback (most recent call last): File "transformers/examples/rag/finetune_rag.py", line 512, in <module> main() File "transformers/examples/rag/finetune_rag.py", line 507, in main trainer.test() File "/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 721, in test results = self.__test_using_best_weights(ckpt_path, test_dataloaders) File "/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 763, in __test_using_best_weights results = self.fit(model) File "/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 445, in fit results = self.accelerator_backend.train() File "/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 148, in train results = self.ddp_train(process_idx=self.task_idx, model=model) File "/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 269, in ddp_train model = self.trainer.precision_connector.connect(model) File "/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/precision_connector.py", line 78, in connect model, optimizers = self.backend.connect(model, self.trainer.optimizers) File "/lib/python3.6/site-packages/pytorch_lightning/plugins/apex.py", line 37, in connect model, optimizers = self.configure_apex(amp, model, optimizers, self.trainer.amp_level) File "/lib/python3.6/site-packages/pytorch_lightning/plugins/apex.py", line 102, in configure_apex model, optimizers = amp.initialize(model, optimizers, opt_level=amp_level) File "/lib/python3.6/site-packages/apex/amp/frontend.py", line 358, in initialize return _initialize(models, optimizers, _amp_state.opt_properties, num_losses, cast_model_outputs) File "/lib/python3.6/site-packages/apex/amp/_initialize.py", line 171, in _initialize check_params_fp32(models) File "/lib/python3.6/site-packages/apex/amp/_initialize.py", line 87, in check_params_fp32 name, param.type())) File "/lib/python3.6/site-packages/apex/amp/_amp_state.py", line 32, in warn_or_err raise RuntimeError(msg) RuntimeError: Found param model.rag.question_encoder.question_encoder.bert_model.embeddings.word_embeddings.weight with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor. When using amp.initialize, you do not need to call .half() on your model before passing it, no matter what optimization level you choose. ``` To add, I am using Nvidia's Apex library for --fp16 training. 
## Expected behavior fine-tuning to complete, with generated models in the output directory. Have spent a couple hours tinkering with no clue how to proceed, any help would be appreciated. Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8838/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8837
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8837/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8837/comments
https://api.github.com/repos/huggingface/transformers/issues/8837/events
https://github.com/huggingface/transformers/issues/8837
752,933,166
MDU6SXNzdWU3NTI5MzMxNjY=
8,837
Inconsistent PreTrainedTokenizerBase.pad argument default value & docstring
{ "login": "Fraser-Greenlee", "id": 8402500, "node_id": "MDQ6VXNlcjg0MDI1MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8402500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Fraser-Greenlee", "html_url": "https://github.com/Fraser-Greenlee", "followers_url": "https://api.github.com/users/Fraser-Greenlee/followers", "following_url": "https://api.github.com/users/Fraser-Greenlee/following{/other_user}", "gists_url": "https://api.github.com/users/Fraser-Greenlee/gists{/gist_id}", "starred_url": "https://api.github.com/users/Fraser-Greenlee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Fraser-Greenlee/subscriptions", "organizations_url": "https://api.github.com/users/Fraser-Greenlee/orgs", "repos_url": "https://api.github.com/users/Fraser-Greenlee/repos", "events_url": "https://api.github.com/users/Fraser-Greenlee/events{/privacy}", "received_events_url": "https://api.github.com/users/Fraser-Greenlee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Seems to have been added in this commit: \r\nhttps://github.com/huggingface/transformers/commit/f3065abdb8805f5beaed9ff1e92ce874e655f5c9#diff-85b29486a884f445b1014a26fecfb189141f2e6b09f4ae701ee758a754fddcc1R2146-R2168\r\nAs part of merge https://github.com/huggingface/transformers/pull/6110", "Hi, indeed! The docs should be changed to reflect the method signature. Do you want to open a PR?" ]
1,606
1,606
1,606
CONTRIBUTOR
null
The docstring states the argument `padding` has a default of `False` but its default is `True` docstring: https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/tokenization_utils_base.py#L2469-L2470 arg: https://github.com/huggingface/transformers/blob/d5b3e56de5376aa85ef46e7f0325139d9e299a41/src/transformers/tokenization_utils_base.py#L2431-L2472 This causes issues when using `DataCollatorForLanguageModeling` with an already padded dataset as it resets the attention mask.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8837/timeline
completed
null
null
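Editor's note: a minimal sketch of the default-value mismatch reported above; passing `padding` explicitly to `tokenizer.pad` sidesteps whichever default actually applies. The checkpoint name is a placeholder.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
batch = [tokenizer("a short sentence"), tokenizer("a noticeably longer example sentence")]

# Being explicit avoids relying on the inconsistently documented default of `padding`.
padded = tokenizer.pad(batch, padding="longest", return_tensors="pt")
print(padded["input_ids"].shape, padded["attention_mask"].shape)
```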
https://api.github.com/repos/huggingface/transformers/issues/8836
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8836/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8836/comments
https://api.github.com/repos/huggingface/transformers/issues/8836/events
https://github.com/huggingface/transformers/pull/8836
752,918,160
MDExOlB1bGxSZXF1ZXN0NTI5MTEwMzE3
8,836
Add utility function for retrieving locally cached models
{ "login": "cdpierse", "id": 8831892, "node_id": "MDQ6VXNlcjg4MzE4OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/8831892?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cdpierse", "html_url": "https://github.com/cdpierse", "followers_url": "https://api.github.com/users/cdpierse/followers", "following_url": "https://api.github.com/users/cdpierse/following{/other_user}", "gists_url": "https://api.github.com/users/cdpierse/gists{/gist_id}", "starred_url": "https://api.github.com/users/cdpierse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cdpierse/subscriptions", "organizations_url": "https://api.github.com/users/cdpierse/orgs", "repos_url": "https://api.github.com/users/cdpierse/repos", "events_url": "https://api.github.com/users/cdpierse/events{/privacy}", "received_events_url": "https://api.github.com/users/cdpierse/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "No problem just pushed the fix now. ", "@LysandreJik Not sure what exactly caused the flax test suite on this to crash. It looks like the docker image crashed.", "I think you need to run the `make style` command on your branch to fix the styling issues. The other test failures seem spurious.", "Thanks!" ]
1,606
1,609
1,609
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds implementation of a small utility function for retrieving a list of locally cached models, discussed in #8803 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8836/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8836", "html_url": "https://github.com/huggingface/transformers/pull/8836", "diff_url": "https://github.com/huggingface/transformers/pull/8836.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8836.patch", "merged_at": 1609772036000 }
https://api.github.com/repos/huggingface/transformers/issues/8835
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8835/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8835/comments
https://api.github.com/repos/huggingface/transformers/issues/8835/events
https://github.com/huggingface/transformers/issues/8835
752,898,496
MDU6SXNzdWU3NTI4OTg0OTY=
8,835
cannot run "examples/language-modeling/run_mlm.py"
{ "login": "HenryPaik1", "id": 42961175, "node_id": "MDQ6VXNlcjQyOTYxMTc1", "avatar_url": "https://avatars.githubusercontent.com/u/42961175?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HenryPaik1", "html_url": "https://github.com/HenryPaik1", "followers_url": "https://api.github.com/users/HenryPaik1/followers", "following_url": "https://api.github.com/users/HenryPaik1/following{/other_user}", "gists_url": "https://api.github.com/users/HenryPaik1/gists{/gist_id}", "starred_url": "https://api.github.com/users/HenryPaik1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HenryPaik1/subscriptions", "organizations_url": "https://api.github.com/users/HenryPaik1/orgs", "repos_url": "https://api.github.com/users/HenryPaik1/repos", "events_url": "https://api.github.com/users/HenryPaik1/events{/privacy}", "received_events_url": "https://api.github.com/users/HenryPaik1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You need to install datasets library: https://github.com/huggingface/datasets\r\n```\r\npip install datasets\r\n```", "Thanks!" ]
1,606
1,606
1,606
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: cmd - Python version: 3.7 - PyTorch version (GPU?): 1.6.0 - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help albert, bert, GPT2, XLM: @LysandreJik ## Information Model I am using (Bert) The problem arises when using: * [o] the official example scripts: (give details below) * [] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [o] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` python examples/language-modeling/run_mlm.py >> Traceback (most recent call last): File "examples/language-modeling/run_mlm.py", line 30, in <module> from datasets import load_dataset ModuleNotFoundError: No module named 'datasets' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8835/timeline
completed
null
null
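Editor's note: the fix suggested in the comments above is simply to install the separate `datasets` package; a minimal sanity check could look like the sketch below (the dataset name is an arbitrary public example, not the one from the issue).

```python
# pip install datasets
from datasets import load_dataset

dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")  # example dataset
print(dataset[0])
```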
https://api.github.com/repos/huggingface/transformers/issues/8834
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8834/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8834/comments
https://api.github.com/repos/huggingface/transformers/issues/8834/events
https://github.com/huggingface/transformers/pull/8834
752,823,933
MDExOlB1bGxSZXF1ZXN0NTI5MDQ3MTg3
8,834
Allow none-tensor fields in BatchEncoding
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
CONTRIBUTOR
null
# What does this PR do? This PR allows BatchEncoding to have non-tensor fields. This is useful, for example, when doing multi-task learning: I can add a task name (str) to the batch and use it to decide the computation later on. Without this PR, I cannot use `.to('cuda')` if there is a str in the batch. I don't know who to tag. @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8834/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8834", "html_url": "https://github.com/huggingface/transformers/pull/8834", "diff_url": "https://github.com/huggingface/transformers/pull/8834.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8834.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8833
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8833/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8833/comments
https://api.github.com/repos/huggingface/transformers/issues/8833/events
https://github.com/huggingface/transformers/issues/8833
752,800,518
MDU6SXNzdWU3NTI4MDA1MTg=
8,833
AutoTokenizer can't find model/tokenizer config.json
{ "login": "abarbosa94", "id": 6608232, "node_id": "MDQ6VXNlcjY2MDgyMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abarbosa94", "html_url": "https://github.com/abarbosa94", "followers_url": "https://api.github.com/users/abarbosa94/followers", "following_url": "https://api.github.com/users/abarbosa94/following{/other_user}", "gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}", "starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions", "organizations_url": "https://api.github.com/users/abarbosa94/orgs", "repos_url": "https://api.github.com/users/abarbosa94/repos", "events_url": "https://api.github.com/users/abarbosa94/events{/privacy}", "received_events_url": "https://api.github.com/users/abarbosa94/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! Do you have a notebook handy so that we can see and try to reproduce the error?", "Thanks for the quick reply! \r\n\r\nI'm doing it in a private repo, I'll try to reproduce it and export it to a public repo asap :)", "I just checked it again and it seems to work smoothly now 🤔 \r\n\r\nI'm closing this and if this happens again in the future, I'll open it again :)\r\n\r\nMy bad.\r\n" ]
1,606
1,608
1,608
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.4.0 - Platform: Linux-4.19.0-8-amd64-x86_64-with-debian-10.3 - Python version: 3.7.3 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> @LysandreJik @mfuntowicz ## Information Model I am using (Bert, XLNet ...): XLM-Roberta, but I've noticed this with other models as well The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1.run `tokenizer = AutoTokenizer.from_pretrained(REF_MODEL)` 2. restart the notebook, for example 3.run `tokenizer = AutoTokenizer.from_pretrained(REF_MODEL)` again <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> The following error occurs: ``` file xlm-roberta-large/config.json not found --------------------------------------------------- OSError Traceback (most recent call last) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 387 resume_download=resume_download, --> 388 local_files_only=local_files_only, 389 ) ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only) 961 # File, but it doesn't exist. 
--> 962 raise EnvironmentError("file {} not found".format(url_or_filename)) 963 else: OSError: file xlm-roberta-large/config.json not found During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-11-b51d77705f76> in <module> ----> 1 tokenizer = AutoTokenizer.from_pretrained(f'{REF_MODEL}') ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 304 config = kwargs.pop("config", None) 305 if not isinstance(config, PretrainedConfig): --> 306 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) 307 308 if "bert-base-japanese" in str(pretrained_model_name_or_path): ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 331 {'foo': False} 332 """ --> 333 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) 334 335 if "model_type" in config_dict: ~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 398 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n" 399 ) --> 400 raise EnvironmentError(msg) 401 402 except json.JSONDecodeError: OSError: Can't load config for 'xlm-roberta-large'. Make sure that: - 'xlm-roberta-large' is a correct model identifier listed on 'https://huggingface.co/models' - or 'xlm-roberta-large' is the correct path to a directory containing a config.json file ``` ## Expected behavior I think that it should load it smoothly
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8833/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8832
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8832/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8832/comments
https://api.github.com/repos/huggingface/transformers/issues/8832/events
https://github.com/huggingface/transformers/pull/8832
752,740,879
MDExOlB1bGxSZXF1ZXN0NTI4OTkyMjA2
8,832
[MT5] Add use_cache to config
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Because two PRs happened in parallel I forgot to add `use_cache` to the MT5 config. Thanks a lot for spotting it @jplu ! This model would have been pretty slow at generation for a while otherwise. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8832/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8832", "html_url": "https://github.com/huggingface/transformers/pull/8832", "diff_url": "https://github.com/huggingface/transformers/pull/8832.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8832.patch", "merged_at": 1606589449000 }
https://api.github.com/repos/huggingface/transformers/issues/8831
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8831/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8831/comments
https://api.github.com/repos/huggingface/transformers/issues/8831/events
https://github.com/huggingface/transformers/issues/8831
752,697,658
MDU6SXNzdWU3NTI2OTc2NTg=
8,831
logging.set_verbosity_error() displays dict instead of NotebookTrainingTracker
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "Hi there, I'm afraid this is not a bug bu how the default of `disable_tqdm` behaves. As shown in the [docs](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments), it defaults to `False` if your verbosity level is at warn or lower, `True` otherwise. So you need to pass along `disabel_tqm=False` to override the default when using this logging level.", "Sorry for the silly oversight! \r\n\r\nI saw the `disable_tqdm` flag but didn't realise that \"progress bars\" also referred to the table of metrics. Would a small clarification in the docs be warranted (I'm happy to do it)?", "Yes we can definitely make the docstring clearer." ]
1,606
1,608
1,608
MEMBER
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.0.0-rc-1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @sgugger ## Information Model I am using (Bert, XLNet ...): `distilbert-base-uncased` The problem arises when using: * [] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) Following the [docs](https://huggingface.co/transformers/main_classes/logging.html#logging) I was looking for a way to turn off the warnings that `transformers` shows when loading a new model and believe that `logging.set_verbosity_error()` should do the trick. However, when working in a _Jupyter notebook environment_, I find that setting the logging level to error produces unexpected output from the `Trainer`, namely that I get a `dict` like ``` {'loss': 0.33437405395507813, 'learning_rate': 1.308411214953271e-06, 'epoch': 0.9345794392523364} {'eval_loss': 0.509843111038208, 'eval_matthews_correlation': 0.5011235129840701, 'epoch': 1.0} {'epoch': 1.0} ``` instead of the progress bar and table of metrics: ![Screen Shot 2020-11-28 at 4 21 34 pm](https://user-images.githubusercontent.com/26859204/100519061-caf6eb80-3195-11eb-83c1-47eeee4414cc.png) I encountered the problem in my own experiments, but have also been able to reproduce it in @sgugger's tutorial on the GLUE tasks: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) The task is GLUE ## To reproduce Steps to reproduce the behavior: 1. Set the logging verbosity to _error_ in the first cell of the notebook, i.e. with ``` # Turn off warnings import transformers transformers.logging.set_verbosity_error() ``` 2. Load and encode dataset 3. Configure trainer 4. 
Run training ``` # With logging.set_verbosity_error() we lose the metrics table :( trainer.train() # Output {'loss': 0.33437405395507813, 'learning_rate': 1.308411214953271e-06, 'epoch': 0.9345794392523364} {'eval_loss': 0.509843111038208, 'eval_matthews_correlation': 0.5011235129840701, 'epoch': 1.0} {'epoch': 1.0} TrainOutput(global_step=535, training_loss=0.34615044994889016) ``` I have trimmed down @sgugger's tutorial to create a reproducible example: https://colab.research.google.com/gist/lewtun/21d44a20f94f480dfa2891f587323ffd/logging-bug-in-trainer.ipynb <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Changing the logging level should not interfere with the display of the progress bar or table of metrics in Jupyter notebooks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8831/timeline
completed
null
null
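Editor's note: per the resolution above, the behaviour is by design; the sketch below keeps the notebook progress widgets while silencing library warnings. The output directory is a placeholder.

```python
import transformers
from transformers import TrainingArguments

transformers.logging.set_verbosity_error()

training_args = TrainingArguments(
    output_dir="test-run",   # placeholder path
    disable_tqdm=False,      # override the default, which becomes True at this verbosity level
)
```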
https://api.github.com/repos/huggingface/transformers/issues/8830
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8830/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8830/comments
https://api.github.com/repos/huggingface/transformers/issues/8830/events
https://github.com/huggingface/transformers/issues/8830
752,678,395
MDU6SXNzdWU3NTI2NzgzOTU=
8,830
Longform QA demo breaks after clearing cache
{ "login": "huu4ontocord", "id": 8900094, "node_id": "MDQ6VXNlcjg5MDAwOTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4", "gravatar_id": "", "url": "https://api.github.com/users/huu4ontocord", "html_url": "https://github.com/huu4ontocord", "followers_url": "https://api.github.com/users/huu4ontocord/followers", "following_url": "https://api.github.com/users/huu4ontocord/following{/other_user}", "gists_url": "https://api.github.com/users/huu4ontocord/gists{/gist_id}", "starred_url": "https://api.github.com/users/huu4ontocord/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/huu4ontocord/subscriptions", "organizations_url": "https://api.github.com/users/huu4ontocord/orgs", "repos_url": "https://api.github.com/users/huu4ontocord/repos", "events_url": "https://api.github.com/users/huu4ontocord/events{/privacy}", "received_events_url": "https://api.github.com/users/huu4ontocord/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This seems to be an out-of-memory error! @yjernite might know what's up.", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
## Environment info Browser: Chrome, running on windows. Running demo at: http://35.226.96.115:8080/ Linked from https://github.com/huggingface/transformers/tree/master/examples/longform-qa ### Who can help @sgugger ## Information I had clicked "Clear Cache" in the app and when I did another search errors came up in the browers. subsequent runs also produces errors in the browser. RuntimeError: Error in void faiss::gpu::allocMemorySpaceV(faiss::gpu::MemorySpace, void**, size_t) at gpu/utils/MemorySpace.cpp:26: Error: 'err == cudaSuccess' failed: failed to cudaMalloc 8987501056 bytes (error 2 out of memory) Traceback: File "/home/yacine/anaconda3/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/yacine/Code/transformers/examples/longform-qa/eli5_app.py", line 78, in <module> passages, gpu_dense_index, es_client = load_indexes() File "/home/yacine/anaconda3/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func return get_or_create_cached_value() File "/home/yacine/anaconda3/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/yacine/Code/transformers/examples/longform-qa/eli5_app.py", line 58, in load_indexes wiki40b_gpu_index_flat.add(wiki40b_passage_reps) # TODO fix for larger GPU File "/home/yacine/anaconda3/lib/python3.7/site-packages/faiss/__init__.py", line 138, in replacement_add self.add_c(n, swig_ptr(x)) File "/home/yacine/anaconda3/lib/python3.7/site-packages/faiss/swigfaiss.py", line 4245, in add return _swigfaiss.GpuIndexFlat_add(self, arg2, x) ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Expected to have the results of the ELI5 search
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8830/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8830/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8829
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8829/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8829/comments
https://api.github.com/repos/huggingface/transformers/issues/8829/events
https://github.com/huggingface/transformers/pull/8829
752,658,946
MDExOlB1bGxSZXF1ZXN0NTI4OTM2MDQ5
8,829
Attempt to fix Flax CI error(s)
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,606
1,606
1,606
MEMBER
null
- Increased the tolerance when comparing Flax and PyTorch outputs (_~0.00058 on my dev box_) - Removed the `jit` parametrization when running `test_multiple_sentences` because it leads to instabilities - Introduced subtests that make explicit what we're doing when enabling / disabling JIT.
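A rough sketch of the comparison pattern this PR describes (looser tolerance, explicit subtests for JIT on/off); the class, method names, and the exact tolerance are placeholders and not taken from the actual test file.

```python
# Hypothetical sketch of a looser-tolerance comparison with explicit JIT on/off subtests.
import unittest
import numpy as np
import jax

class FlaxPyTorchEquivalenceSketch(unittest.TestCase):
    def assert_outputs_close(self, pt_output, flax_output, atol=1e-3):
        # ~0.00058 was observed between frameworks, so 1e-3 is a plausible tolerance
        np.testing.assert_allclose(np.asarray(pt_output), np.asarray(flax_output), atol=atol)

    def check_with_and_without_jit(self, flax_fn, inputs, pt_output):
        for use_jit in (False, True):
            with self.subTest(jit=use_jit):          # names the JIT mode in the test report
                fn = jax.jit(flax_fn) if use_jit else flax_fn
                self.assert_outputs_close(pt_output, fn(**inputs))
```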
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8829/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8829", "html_url": "https://github.com/huggingface/transformers/pull/8829", "diff_url": "https://github.com/huggingface/transformers/pull/8829.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8829.patch", "merged_at": 1606761798000 }
https://api.github.com/repos/huggingface/transformers/issues/8828
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8828/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8828/comments
https://api.github.com/repos/huggingface/transformers/issues/8828/events
https://github.com/huggingface/transformers/pull/8828
752,523,682
MDExOlB1bGxSZXF1ZXN0NTI4ODI2NTgw
8,828
token-classification: use is_world_process_zero instead of is_world_master()
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "/cc @sgugger :hugs: " ]
1,606
1,606
1,606
COLLABORATOR
null
Hi, I just found some leftovers of the `is_world_master()` function in the token classification example. As this method has been removed, the following error message is thrown when using the `do_prediction` option: ```bash Traceback (most recent call last): File "run_ner.py", line 394, in <module> main() File "run_ner.py", line 372, in main if trainer.is_world_master(): AttributeError: 'Trainer' object has no attribute 'is_world_master' ``` This PR fixes it and uses the new `is_world_process_zero()` method instead!
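A minimal sketch of the renamed call; the wrapper function and file handling here are hypothetical, and only the `is_world_master()` to `is_world_process_zero()` substitution reflects what the PR changes.

```python
# Hypothetical helper illustrating the rename; only the main process writes predictions.
from transformers import Trainer

def write_predictions(trainer: Trainer, predictions, output_file: str) -> None:
    # trainer.is_world_master() was removed; is_world_process_zero() is its replacement
    if trainer.is_world_process_zero():
        with open(output_file, "w") as writer:
            for line in predictions:
                writer.write(f"{line}\n")
```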
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8828/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8828", "html_url": "https://github.com/huggingface/transformers/pull/8828", "diff_url": "https://github.com/huggingface/transformers/pull/8828.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8828.patch", "merged_at": 1606746116000 }
https://api.github.com/repos/huggingface/transformers/issues/8827
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8827/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8827/comments
https://api.github.com/repos/huggingface/transformers/issues/8827/events
https://github.com/huggingface/transformers/issues/8827
752,482,863
MDU6SXNzdWU3NTI0ODI4NjM=
8,827
error: sentencepiece 0.1.94 is installed but sentencepiece==0.1.91 is required by {'transformers'}
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "The error tells you it wants SentencePiece 0.1.91, can you install that version instead?\r\n```\r\npip install -U sentencepiece==0.1.91\r\n```\r\nWe should update the requirements.txt file to reflect this. Do you want to open a PR?", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,606
1,614
1,614
NONE
null
## Environment info - `transformers` version: 3.5.1 - Platform: Google Cloud - Python version: 3.7 - PyTorch version (GPU?): TPU, 1.6 - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help tokenizers: @mfuntowicz Trainer: @sgugger examples/distillation: @VictorSanh examples/seq2seq: @patil-suraj ## Information I am using the requirements.txt file inside examples, and when installing it, it fails with this error: error: sentencepiece 0.1.94 is installed but sentencepiece==0.1.91 is required by {'transformers'} Here is my setup script, as mentioned in the requirements of transformers 3.5.1 for running the examples. Thank you. ``` install_requires=[ 'sentencepiece != 0.1.92', 'transformers==3.5.1', 'tensorboard', 'scikit-learn', 'seqeval', 'psutil', 'sacrebleu', 'rouge-score', 'tensorflow_datasets', 'pytorch-lightning==1.0.4', 'matplotlib', 'git-python==1.0.3', 'faiss-cpu', 'streamlit', 'elasticsearch', 'nltk', 'pandas', 'datasets', 'fire', 'pytest', 'conllu', 'tf-nightly', 'google-cloud-storage', ], ```
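A hedged sketch of the fix suggested in the comments: pin sentencepiece to the exact version that transformers 3.5.1 declares rather than only excluding 0.1.92, so pip cannot resolve to 0.1.94. The shortened list below is illustrative, not the full setup script.

```python
# Sketch of the relevant part of the setup script; only the first two pins matter for this conflict.
install_requires = [
    "sentencepiece==0.1.91",   # exact pin instead of 'sentencepiece != 0.1.92'
    "transformers==3.5.1",
    # ... remaining example requirements unchanged ...
]
```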
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8827/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8826
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8826/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8826/comments
https://api.github.com/repos/huggingface/transformers/issues/8826/events
https://github.com/huggingface/transformers/pull/8826
752,481,173
MDExOlB1bGxSZXF1ZXN0NTI4Nzk1MTgx
8,826
[CI] implement job skipping for doc-only PRs
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "That's a great idea!", "Btw this is why GitHub Actions are so cool: this is built-in", "ok, this is good now.", "> Btw this is why GitHub Actions are so cool: this is built-in\r\n\r\nThis among others :) the only pain point is the lack of anchors, which would be a godsend given our current YAML files.", "There is a problem with the test, which seems to skip the tests as soon as there is at least one doc file, even if code files have also been modified. #8852 gives an example of this happening.\r\nI have commented out the line `skip-job-on-doc-only-changes` in [this commit](https://github.com/huggingface/transformers/commit/08e707633ca5e48b3c0d068522ccac36e623b09d) to have the CI work while waiting for a fix on your side @stas00.", "@sgugger, can you show me a a specific example of this behavior? Looking at PR you linked and other commits since the skip rule has been merged I don't see this happening.\r\n\r\nFor example, https://github.com/huggingface/transformers/commit/75f8100fc77e4124aa643c45c4a4943cd5ee47cd has both docs and code files and it has the skip rule activated - and none of the jobs were skipped, e.g. here is one job from that PR:\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/16543/workflows/ac01a5da-d9dc-4e23-8ee7-9e562263f030/jobs/128245\r\n\r\nThank you!\r\n\r\np.s. while we are sorting this new one out, you don't need to comment out all the invocation of the command, just the 'circleci halt' line in the command.", "Arg, I force-pushed my rebase so we lost the commit where the problem was happening. I assure you that PR had all tests skipped except `build_doc`.", "I totally believe you what you saw, but this must have been some edge case that I haven't accounted for and I need to see what it was. As I have shown in the 2 links in the comment above yours the rule did work correctly for a PR with 2 docs and one py file, so what you suggested that it skips as soon as there is at least one doc doesn't seem to be the case.\r\n\r\nDo you know at least what files were involved in that commit where the undesired skip has occurred? or perhaps it's still in your local branch?\r\n\r\nThe logic is simple:\r\n1) it gets the modified file names\r\n2) it then removes any docs that match `\\.(md|rst)$`\r\n3) \r\n - a. if there are any files left, we have non-docs - normal behavior ensues \r\n - b. if there are no files left, we have only docs - and it skips\r\n\r\n", "I've deleted that branch but there was only one commit with the 9 files you see in the PR.", "OK, I created a PR https://github.com/huggingface/transformers/pull/8853 with the exact same files by just reverting your commit 553029909620455e040a49032a9c45f6a5f0cd52 for the sake of the test (not intending to merge it) - plus `.circleci/config.yml` to restore the skipping rule - none of the checks has been skipped.\r\n\r\nMoreover, you said:\r\n\r\n> There is a problem with the test, which seems to skip the tests as soon as there is at least one doc file\r\n\r\nbut your commit had no doc files.\r\n\r\nDo you want to try to re-enable the rule and monitor to catch a potential edge case that you saw but we no longer know what it was? 
And if you run into it and I will monitor too, let's make sure to save the branch so that we could reproduce the problem.\r\n\r\nTo quickly disable the skip just this line needs to be commented out:\r\nhttps://github.com/huggingface/transformers/blob/dfec84db3fdce1079f01f1bc8dfaf21db2ccaba1/.circleci/config.yml#L19\r\n\r\nThe only tricky part with monitoring is that it won't affect older branches that weren't rebased or created after the skip was enabled.\r\n\r\nOh and I apologize if this causes a temporary potential hurdle in normal PR process - hopefully we will sort it out quickly and overall things will be better in the long run.", "If that helps, [here](https://github.com/huggingface/transformers/pull/8850/commits/5170e5381b9fccdfb9405d665ecee0515efc6453) is another commit with rst, md and py files where the tests were all skipped: \r\nThe corresponding PR is #8850", "Ah, great! That helped a lot, @sgugger - Thank you for finding it!\r\n\r\nIt appears to be a bug in circleCI (https://app.circleci.com/pipelines/github/huggingface/transformers/16541/workflows/17b20230-8d7c-4b36-813c-2681f2c8a977/jobs/128232)\r\n\r\nIt's missing `<< pipeline.git.base_revision >>` in\r\n\r\n```\r\nif git diff --name-only << pipeline.git.base_revision >>...<< pipeline.git.revision >> | egrep -qv '\\.(md|rst)$'\r\n```\r\nresulting in:\r\n```\r\nif git diff --name-only ...5170e5381b9fccdfb9405d665ecee0515efc6453 | egrep -qv '\\.(md|rst)$'\r\n```\r\nand hence fails the test. (it's missing the first hash before `...`).\r\n\r\nBack to the drawing board.", "Can you think of why these few commits could be missing `pipeline.git.base_revision` - was there something special about those?", "I have no idea, but if CircleCI is flaky like this, I guess we won't be able to use this to determine whether the commit contains only doc files or not...", "We still can, by checking whether `pipeline.git.base_revision` is defined, and never skip if it's not. If that's the best we can do, it won't always save resources. \r\n\r\nBut let me research first why is it not defined at times.", "Workaround: https://github.com/huggingface/transformers/pull/8853" ]
1,606
1,606
1,606
CONTRIBUTOR
null
Let's save some time and money. This PR: * [x] skips most jobs when the only change is in `\.(md|rst)$` files. I tested this with various types of files and it seems to do the right thing. But if we merge, let's monitor that I didn't miss some use case and we end up with a broken master if some CI jobs didn't run. - pros: obvious - cons: I don't like that the skipped CI job status appears as completed normally, even though it didn't quite run. Let's hope circleci comes up with some better way of indicating that the job was skipped. --------------- how it was done: `git merge-base --fork-point master` to get the commit range didn't work at all, even though that's what we use for the `fixup` `Makefile` target. Other suggestions I found didn't work either. In the end I found https://circleci.com/docs/2.0/pipeline-variables/ to get the correct commit range: ``` git diff --name-only << pipeline.git.base_revision >>...<< pipeline.git.revision >> ``` and now all is good. **credits**: the `circleci step halt` idea comes from this blog https://yu-ishikawa.medium.com/reusable-a-circleci-command-to-halt-if-no-changed-target-files-e87c6b0af82b @LysandreJik, @sgugger
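For illustration only, here is the same doc-only check rendered as a standalone Python function instead of the shell one-liner used in the CircleCI command; the function name and the guard for a missing base revision are assumptions (the guard corresponds to the workaround discussed in the comments, not to the merged config).

```python
# Hypothetical Python rendering of the doc-only-change test used to decide whether to halt a job.
import re
import subprocess

def is_doc_only_change(base_revision: str, revision: str) -> bool:
    if not base_revision:  # pipeline.git.base_revision can be missing; never skip in that case
        return False
    changed = subprocess.run(
        ["git", "diff", "--name-only", f"{base_revision}...{revision}"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    # skip only when every modified file is a .md or .rst doc
    return bool(changed) and all(re.search(r"\.(md|rst)$", name) for name in changed)
```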
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8826/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8826/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8826", "html_url": "https://github.com/huggingface/transformers/pull/8826", "diff_url": "https://github.com/huggingface/transformers/pull/8826.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8826.patch", "merged_at": 1606667490000 }