url
stringlengths
62
66
repository_url
stringclasses
1 value
labels_url
stringlengths
76
80
comments_url
stringlengths
71
75
events_url
stringlengths
69
73
html_url
stringlengths
50
56
id
int64
377M
2.15B
node_id
stringlengths
18
32
number
int64
1
29.2k
title
stringlengths
1
487
user
dict
labels
list
state
stringclasses
2 values
locked
bool
2 classes
assignee
dict
assignees
list
comments
sequence
created_at
int64
1.54k
1.71k
updated_at
int64
1.54k
1.71k
closed_at
int64
1.54k
1.71k
author_association
stringclasses
4 values
active_lock_reason
stringclasses
2 values
body
stringlengths
0
234k
reactions
dict
timeline_url
stringlengths
71
75
state_reason
stringclasses
3 values
draft
bool
2 classes
pull_request
dict
https://api.github.com/repos/huggingface/transformers/issues/7621
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7621/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7621/comments
https://api.github.com/repos/huggingface/transformers/issues/7621/events
https://github.com/huggingface/transformers/pull/7621
716,032,978
MDExOlB1bGxSZXF1ZXN0NDk4ODQ0NTgx
7,621
[No merge] TF integration testing
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik Is this PR done with your last changes in the tests?", "Ah, I had forgotten about this. I'll rebase and ping you for review\r\n", "should be good for review @jplu ", "They're not the same as they don't rely on the full checkpoints but on some random tiny ones, to make the CI faster.\r\nIt does test every same aspect, however: the weights loading, the full inference, the expected results." ]
1,602
1,605
1,605
MEMBER
null
Adds integrationt tests for BERT, ELECTRA and Longformer to ensure that PR such as https://github.com/huggingface/transformers/pull/7605 do not impact the current state of models. RoBERTa not done because it's already done. Patches a bug with the `ElectraForPreTraining` when batch size = 1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7621/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7621", "html_url": "https://github.com/huggingface/transformers/pull/7621", "diff_url": "https://github.com/huggingface/transformers/pull/7621.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7621.patch", "merged_at": 1605034954000 }
https://api.github.com/repos/huggingface/transformers/issues/7620
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7620/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7620/comments
https://api.github.com/repos/huggingface/transformers/issues/7620/events
https://github.com/huggingface/transformers/issues/7620
716,019,928
MDU6SXNzdWU3MTYwMTk5Mjg=
7,620
Downloading DPR model ('facebook/dpr-ctx_encoder-single-nq-base')
{ "login": "abesalom10", "id": 57544411, "node_id": "MDQ6VXNlcjU3NTQ0NDEx", "avatar_url": "https://avatars.githubusercontent.com/u/57544411?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abesalom10", "html_url": "https://github.com/abesalom10", "followers_url": "https://api.github.com/users/abesalom10/followers", "following_url": "https://api.github.com/users/abesalom10/following{/other_user}", "gists_url": "https://api.github.com/users/abesalom10/gists{/gist_id}", "starred_url": "https://api.github.com/users/abesalom10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abesalom10/subscriptions", "organizations_url": "https://api.github.com/users/abesalom10/orgs", "repos_url": "https://api.github.com/users/abesalom10/repos", "events_url": "https://api.github.com/users/abesalom10/events{/privacy}", "received_events_url": "https://api.github.com/users/abesalom10/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, could you mention the issue you're having?", "I want to use a natural question dataset and model trained on that. I have seen this code:\r\n```\r\nfrom transformers import DPRReader, DPRReaderTokenizer\r\ntokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')\r\nmodel = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base', return_dict=True)\r\nencoded_inputs = tokenizer(\r\n questions=[\"What is love ?\"],\r\n titles=[\"Haddaway\"],\r\n texts=[\"'What Is Love' is a song recorded by the artist Haddaway\"],\r\n return_tensors='pt'\r\n )\r\noutputs = model(**encoded_inputs)\r\nstart_logits = outputs.stat_logits\r\nend_logits = outputs.end_logits\r\nrelevance_logits = outputs.relevance_logits\r\n```\r\nBut from here I can not know how to return the answer from the model. It gives us just starting and ending position of the answer as far as I understand.\r\nAnd also CAn I use my document as a context, and model search answer in this document. If yes please tell me how it is possible.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,602
1,608
1,608
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!--Hi, I want to use model build on the natural questions. I have to question: 1.. I see an example of the above-mentioned model. here it is ``` from transformers import DPRReader, DPRReaderTokenizer tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base') model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base', return_dict=True) encoded_inputs = tokenizer( questions=["What is love ?"], titles=["Haddaway"], texts=["'What Is Love' is a song recorded by the artist Haddaway"], return_tensors='pt' ) outputs = model(**encoded_inputs) start_logits = outputs.stat_logits end_logits = outputs.end_logits relevance_logits = outputs.relevance_logits ``` So I want to run the command which gives me the answers here. It gives me just the start and end position of answer. 2.. Can I use my context (my document) where I want to search model the answer? is it possible here? Thanks in advance--> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7620/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7619
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7619/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7619/comments
https://api.github.com/repos/huggingface/transformers/issues/7619/events
https://github.com/huggingface/transformers/pull/7619
715,966,976
MDExOlB1bGxSZXF1ZXN0NDk4Nzg4OTcz
7,619
Enhance TFTrainer.save_model()
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> \r\n> \r\n> Awesome! Did you try it with a usual training example to see if everything is ok?\r\n\r\nYes, with example/text-classification.\r\nI didn't check yet with a usual `tf.keras.models.Model` (i.e. not TFPretrainedModel). But when I continue with `test_trainer_tf.py`, it will be tested.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hi, @sgugger & @LysandreJik ,\r\n\r\nI didn't realized that this PR is not merged into master. Since it has been for some time, I rebased the branch. All the suggestions from @sgugger are done in the latest version.\r\n\r\nIt would be great if you can merge this PR. Thanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,602
1,651
1,621
COLLABORATOR
null
# What does this PR do? @jplu , could you close PR #7597, please? This PR is a clean one, without the file `modeling_tf_utils` being changed at all. Currently, TFTrainer.save_model() raises errors if the model is not TFPreTrainedModel . However Trainer works fine with torch.nn.modules.Module. This is a step to make TFTrainer work with usual tf.keras.models.Model models. The idea (from @sgugger) is that a user is building their own models that work like ours (e.g., return the loss as the first output) and can train them with Trainer. Furthermore, a SavedModel is also saved using tf.saved_model.save(). For @jplu and @sgugger .
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7619/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7619", "html_url": "https://github.com/huggingface/transformers/pull/7619", "diff_url": "https://github.com/huggingface/transformers/pull/7619.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7619.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7618
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7618/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7618/comments
https://api.github.com/repos/huggingface/transformers/issues/7618/events
https://github.com/huggingface/transformers/issues/7618
715,842,402
MDU6SXNzdWU3MTU4NDI0MDI=
7,618
position_ids parameter cannot work with past parameter for GPT2Model during batch inference
{ "login": "gmftbyGMFTBY", "id": 27548710, "node_id": "MDQ6VXNlcjI3NTQ4NzEw", "avatar_url": "https://avatars.githubusercontent.com/u/27548710?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gmftbyGMFTBY", "html_url": "https://github.com/gmftbyGMFTBY", "followers_url": "https://api.github.com/users/gmftbyGMFTBY/followers", "following_url": "https://api.github.com/users/gmftbyGMFTBY/following{/other_user}", "gists_url": "https://api.github.com/users/gmftbyGMFTBY/gists{/gist_id}", "starred_url": "https://api.github.com/users/gmftbyGMFTBY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gmftbyGMFTBY/subscriptions", "organizations_url": "https://api.github.com/users/gmftbyGMFTBY/orgs", "repos_url": "https://api.github.com/users/gmftbyGMFTBY/repos", "events_url": "https://api.github.com/users/gmftbyGMFTBY/events{/privacy}", "received_events_url": "https://api.github.com/users/gmftbyGMFTBY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @gmftbyGMFTBY - I think the better approach to tackle your problem would actually be this one here: https://github.com/huggingface/transformers/issues/3021#issuecomment-681792104 .\r\n\r\nThis way you should not run into any errors regarding the position_ids", "Hey, @patrickvonplaten, it works for me. Thank you so mcuh." ]
1,602
1,602
1,602
CONTRIBUTOR
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> Hi, @patrickvonplaten. I just finished training a GPT-2 model by using the `GPT2Model` class, and I try to speed up the inference by using the batch inference (very similar to #3021 ). However, I found that the `position_ids` parameter cannot work with the `past` parameter, and it raises the Error: `RuntimeError: The size of tensor a (32) must match the size of tensor b (2592) at non-singleton dimension 0`. ![微信图片_20201007002211](https://user-images.githubusercontent.com/27548710/95229447-2b6f4980-0833-11eb-8cf4-07465f8935ec.png) I found the exception happens in the `modeling_gpt2.py` line 471, so I check the original codes of the `GPT2Model` class. In `modeling_gpt2.py` line 426 to line 427 (line 558 to line 559 in the lastest original code): ```python if position_ids is not None: position_ids = position_ids.view(-1, input_shape[-1]) ``` Actually, during using the `past` parameter for speeding up the inference, the input_shape is `[batch_size, 1]`, but the `position_ids` is `[batch_size, seq_length]`. So, when we use the `past` and `position_ids` at the same time, the position_ids will be converted into a wrong shape `[batch_size*seq_length, 1]` (the shape we want should be `[batch_size, seq_length]`). For example, as shown in the figure, the `batch_size` is 32 and the `seq_length` is 81, and the generated position_ids shape is `[2592, 1]` (32*81=2592), but the correct position_ids shape should be `[32, 81]`. So I think it may be a bug, but I am not so sure about it. Can you guys help me to figure it out? Here are the environment variables in my system: * transformers==2.11.0 * pytorch==1.5.1 * python==3.6.11 <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7618/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7617
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7617/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7617/comments
https://api.github.com/repos/huggingface/transformers/issues/7617/events
https://github.com/huggingface/transformers/issues/7617
715,840,680
MDU6SXNzdWU3MTU4NDA2ODA=
7,617
OSError: Can't load config for saved_model when deploying on EC2.
{ "login": "katreparitosh", "id": 42617598, "node_id": "MDQ6VXNlcjQyNjE3NTk4", "avatar_url": "https://avatars.githubusercontent.com/u/42617598?v=4", "gravatar_id": "", "url": "https://api.github.com/users/katreparitosh", "html_url": "https://github.com/katreparitosh", "followers_url": "https://api.github.com/users/katreparitosh/followers", "following_url": "https://api.github.com/users/katreparitosh/following{/other_user}", "gists_url": "https://api.github.com/users/katreparitosh/gists{/gist_id}", "starred_url": "https://api.github.com/users/katreparitosh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/katreparitosh/subscriptions", "organizations_url": "https://api.github.com/users/katreparitosh/orgs", "repos_url": "https://api.github.com/users/katreparitosh/repos", "events_url": "https://api.github.com/users/katreparitosh/events{/privacy}", "received_events_url": "https://api.github.com/users/katreparitosh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This can happen if you're fetching a model from S3 but you have no internet access, or if you're using an incorrect URL to a local folder.", "Hello @LysandreJik \r\n\r\nI have uploaded the saved model in a folder on my EC2 instance. Therefore, the location for the model is from the instance file directory which I have verified multiple times.\r\n\r\nAlso, the model functions properly when deployed using Flask over localhost. \r\n\r\nDo I need to download the pre-trained models as a command in the dockerfile? \r\n\r\nKindly help.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I'm seeing the same issue on my community model\r\n\r\n```\r\nOSError: Can't load config for 'model_path'. Make sure that:\r\n\r\n- 'model_path' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'model_path' is the correct path to a directory containing a config.json file\r\n```\r\n\r\nI verified the model files are there. How can i work around this?", "It would be helpful if you opened a new issue with everything related to your environment, as well as the code you use. Are you also on EC2? What is in the `model_path` folder? What is your `transformers` version? All that's asked in the template would be very helpful for us to help you.", "@LysandreJik Thanks for the reply! I created a new issue https://github.com/huggingface/transformers/issues/9106. Other old huggingtweets models still work but not the new ones, not sure what the problem is.", "> OSError: Can't load config for 'model_path'. Make sure that:\r\n> \r\n> - 'model_path' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\nHye I'm facing the same. Did you solve that ? ", "I have the same error message. It also says that it can't find url\"/resolve/main/config.json\". I saved my model like they said but only have a folder \"results\" containing \"pytorch_model.bin\" and \"training_args.bin\".\r\n\r\nEdit: I tried to also save the tokenizer (despite only having fine-tuned). This gave me a tokenizer_config.json which still isn't enough.\r\n\r\nHow do I get a config.json in my directory? I'm using a custom BERT modeled after BertForTokenClassification (https://huggingface.co/transformers/_modules/transformers/models/bert/modeling_bert.html#BertForTokenClassification) which doesn't specify a config attribute.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,602
1,620
1,620
NONE
null
I was deploying a trained model on AWS EC2 instance (t3a.xlarge) using a dockerized image and Flask. The model was trained using [fast-bert](https://github.com/kaushaltrivedi) that implements transformers as a dependency. When I passed a sentence on the rendered page, I received `"In get_config_dict raise EnvironmentError OSError"` and ``` OSError: Can't load config for 'model/final_model'. Make sure that: 'path/to/final_model' is a correct model identifier listed on 'https://huggingface.co/models' or 'path/to/final_model' is the correct path to a directory containing a config.json file ``` As suggested in certain threads, I re-installed the image with the latest transformers==3.3.1 release. However, I am unable to figure out the issue. Kindly help. Similar to #6267 #5803 #7412 ![config_Error](https://user-images.githubusercontent.com/42617598/95229583-5e5b1280-081e-11eb-9a8b-e47a0f9131ab.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7617/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7616
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7616/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7616/comments
https://api.github.com/repos/huggingface/transformers/issues/7616/events
https://github.com/huggingface/transformers/pull/7616
715,820,702
MDExOlB1bGxSZXF1ZXN0NDk4NjY1NTQy
7,616
Fix wrong reference name/filename in docstring of `SquadProcessor`
{ "login": "phiyodr", "id": 33572125, "node_id": "MDQ6VXNlcjMzNTcyMTI1", "avatar_url": "https://avatars.githubusercontent.com/u/33572125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phiyodr", "html_url": "https://github.com/phiyodr", "followers_url": "https://api.github.com/users/phiyodr/followers", "following_url": "https://api.github.com/users/phiyodr/following{/other_user}", "gists_url": "https://api.github.com/users/phiyodr/gists{/gist_id}", "starred_url": "https://api.github.com/users/phiyodr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phiyodr/subscriptions", "organizations_url": "https://api.github.com/users/phiyodr/orgs", "repos_url": "https://api.github.com/users/phiyodr/repos", "events_url": "https://api.github.com/users/phiyodr/events{/privacy}", "received_events_url": "https://api.github.com/users/phiyodr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #7613 Replace wrong filenames in docstring: `train-v1.1.json`/`train-v2.0.json` -> `dev-v1.1.json`/`dev-v2.0.json` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7616/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7616", "html_url": "https://github.com/huggingface/transformers/pull/7616", "diff_url": "https://github.com/huggingface/transformers/pull/7616.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7616.patch", "merged_at": 1602021750000 }
https://api.github.com/repos/huggingface/transformers/issues/7615
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7615/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7615/comments
https://api.github.com/repos/huggingface/transformers/issues/7615/events
https://github.com/huggingface/transformers/issues/7615
715,760,355
MDU6SXNzdWU3MTU3NjAzNTU=
7,615
Feature Request: Support training/evaluation on Squad-format (json) files in SquadDataset for quick Squad fine-tuning
{ "login": "raperry", "id": 37008135, "node_id": "MDQ6VXNlcjM3MDA4MTM1", "avatar_url": "https://avatars.githubusercontent.com/u/37008135?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raperry", "html_url": "https://github.com/raperry", "followers_url": "https://api.github.com/users/raperry/followers", "following_url": "https://api.github.com/users/raperry/following{/other_user}", "gists_url": "https://api.github.com/users/raperry/gists{/gist_id}", "starred_url": "https://api.github.com/users/raperry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raperry/subscriptions", "organizations_url": "https://api.github.com/users/raperry/orgs", "repos_url": "https://api.github.com/users/raperry/repos", "events_url": "https://api.github.com/users/raperry/events{/privacy}", "received_events_url": "https://api.github.com/users/raperry/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,608
1,608
NONE
null
I am currently working on a project using [run_squad_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad_trainer.py) to quickly fine-tune on new Squad-format (v2.0 json) files. However, the current SquadDataset class only allows training/evaluating on the original Squad jsons (train-v2.0.json, dev-v2.0.json). I have used a quick work around by using softlinks, to the actual training/evaluation files, but feel this is a little contrived. I believe if the arguments `train_file` and `predict_file` are added in [SquadArguments](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/datasets/squad.py#L36) and also line 152 in [SquadDataset](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/datasets/squad.py#L152) is changed to `self.examples = self.processor.get_dev_examples(args.data_dir, filename=args.predict_file)` and line 154 in [SquadDataset](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/datasets/squad.py#L154) `self.examples = self.processor.get_train_examples(args.data_dir, filename=arg.train_file)` that may do the trick. At least this approach works in [run_squad.py](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/examples/question-answering/run_squad.py#L444). Thanks for your great work!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7615/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7614
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7614/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7614/comments
https://api.github.com/repos/huggingface/transformers/issues/7614/events
https://github.com/huggingface/transformers/issues/7614
715,758,639
MDU6SXNzdWU3MTU3NTg2Mzk=
7,614
Feature Request: Support training and evaluating on Squad-format (json) files in SquadDataset for easy Squad fine-tuning
{ "login": "raperry", "id": 37008135, "node_id": "MDQ6VXNlcjM3MDA4MTM1", "avatar_url": "https://avatars.githubusercontent.com/u/37008135?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raperry", "html_url": "https://github.com/raperry", "followers_url": "https://api.github.com/users/raperry/followers", "following_url": "https://api.github.com/users/raperry/following{/other_user}", "gists_url": "https://api.github.com/users/raperry/gists{/gist_id}", "starred_url": "https://api.github.com/users/raperry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raperry/subscriptions", "organizations_url": "https://api.github.com/users/raperry/orgs", "repos_url": "https://api.github.com/users/raperry/repos", "events_url": "https://api.github.com/users/raperry/events{/privacy}", "received_events_url": "https://api.github.com/users/raperry/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sorry duplicated my request." ]
1,601
1,601
1,601
NONE
null
I am currently working on a project using [run_squad_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad_trainer.py) to quickly fine-tune on new Squad-format (v2.0 json) files. However, the current SquadDataset class only allows training/evaluating on the original Squad jsons (train-v2.0.json, dev-v2.0.json). I have used a quick work around by using softlinks, to the actual training/evaluation files, but feel this is a little contrived. I believe if the arguments `train_file` and `predict_file` are added in [SquadArguments](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/datasets/squad.py#L36) and also line 152 in [SquadDataset](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/datasets/squad.py#L152) is changed to `self.examples = self.processor.get_dev_examples(args.data_dir, filename=args.predict_file)` and line 154 in [SquadDataset](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/datasets/squad.py#L154) `self.examples = self.processor.get_train_examples(args.data_dir, filename=arg.train_file)` that may do the trick. At least this approach works in [run_squad.py](https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/examples/question-answering/run_squad.py#L444). Thanks for your great work!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7614/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7614/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7613
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7613/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7613/comments
https://api.github.com/repos/huggingface/transformers/issues/7613/events
https://github.com/huggingface/transformers/issues/7613
715,715,863
MDU6SXNzdWU3MTU3MTU4NjM=
7,613
SquadProcessor: Wrong reference name/filename in docstring
{ "login": "phiyodr", "id": 33572125, "node_id": "MDQ6VXNlcjMzNTcyMTI1", "avatar_url": "https://avatars.githubusercontent.com/u/33572125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phiyodr", "html_url": "https://github.com/phiyodr", "followers_url": "https://api.github.com/users/phiyodr/followers", "following_url": "https://api.github.com/users/phiyodr/following{/other_user}", "gists_url": "https://api.github.com/users/phiyodr/gists{/gist_id}", "starred_url": "https://api.github.com/users/phiyodr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phiyodr/subscriptions", "organizations_url": "https://api.github.com/users/phiyodr/orgs", "repos_url": "https://api.github.com/users/phiyodr/repos", "events_url": "https://api.github.com/users/phiyodr/events{/privacy}", "received_events_url": "https://api.github.com/users/phiyodr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed! Do you want to open a PR fixing the doc?", "Yes, I can do that. :)" ]
1,601
1,602
1,602
CONTRIBUTOR
null
As the docstring of the function `get_train_examples()` refers to `train-v1.1.json`/`train-v2.0.json`, I guess `get_dev_examples()` should refer to `dev-v1.1.json`/`dev-v2.0.json` (but refers to `train-v1.1.json`/`train-v2.0.json`): https://github.com/huggingface/transformers/blob/aa6c3c14b4ff8fbd5d40b100f4aae71fb359d6ae/src/transformers/data/processors/squad.py#L610-L617
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7613/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7612
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7612/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7612/comments
https://api.github.com/repos/huggingface/transformers/issues/7612/events
https://github.com/huggingface/transformers/pull/7612
715,621,522
MDExOlB1bGxSZXF1ZXN0NDk4NDk4MTU1
7,612
updating modelcard with training dataset information.
{ "login": "cedspam", "id": 7693193, "node_id": "MDQ6VXNlcjc2OTMxOTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7693193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cedspam", "html_url": "https://github.com/cedspam", "followers_url": "https://api.github.com/users/cedspam/followers", "following_url": "https://api.github.com/users/cedspam/following{/other_user}", "gists_url": "https://api.github.com/users/cedspam/gists{/gist_id}", "starred_url": "https://api.github.com/users/cedspam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cedspam/subscriptions", "organizations_url": "https://api.github.com/users/cedspam/orgs", "repos_url": "https://api.github.com/users/cedspam/repos", "events_url": "https://api.github.com/users/cedspam/events{/privacy}", "received_events_url": "https://api.github.com/users/cedspam/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,601
1,601
1,601
CONTRIBUTOR
null
updating modelcard with training dataset information.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7612/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7612", "html_url": "https://github.com/huggingface/transformers/pull/7612", "diff_url": "https://github.com/huggingface/transformers/pull/7612.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7612.patch", "merged_at": 1601988416000 }
https://api.github.com/repos/huggingface/transformers/issues/7611
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7611/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7611/comments
https://api.github.com/repos/huggingface/transformers/issues/7611/events
https://github.com/huggingface/transformers/pull/7611
715,583,053
MDExOlB1bGxSZXF1ZXN0NDk4NDY2NDAz
7,611
typo fix
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,601
1,601
1,601
CONTRIBUTOR
null
It should be T5-3B not T5-3M. Fixes # (issue) ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Model Cards: @julien-c T5: @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7611/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7611", "html_url": "https://github.com/huggingface/transformers/pull/7611", "diff_url": "https://github.com/huggingface/transformers/pull/7611.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7611.patch", "merged_at": 1601991172000 }
https://api.github.com/repos/huggingface/transformers/issues/7610
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7610/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7610/comments
https://api.github.com/repos/huggingface/transformers/issues/7610/events
https://github.com/huggingface/transformers/pull/7610
715,561,668
MDExOlB1bGxSZXF1ZXN0NDk4NDQ4MjM2
7,610
Fix tokenizer UnboundLocalError when padding is set to PaddingStrategy.MAX_LENGTH
{ "login": "GabrielePicco", "id": 12031208, "node_id": "MDQ6VXNlcjEyMDMxMjA4", "avatar_url": "https://avatars.githubusercontent.com/u/12031208?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GabrielePicco", "html_url": "https://github.com/GabrielePicco", "followers_url": "https://api.github.com/users/GabrielePicco/followers", "following_url": "https://api.github.com/users/GabrielePicco/following{/other_user}", "gists_url": "https://api.github.com/users/GabrielePicco/gists{/gist_id}", "starred_url": "https://api.github.com/users/GabrielePicco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GabrielePicco/subscriptions", "organizations_url": "https://api.github.com/users/GabrielePicco/orgs", "repos_url": "https://api.github.com/users/GabrielePicco/repos", "events_url": "https://api.github.com/users/GabrielePicco/events{/privacy}", "received_events_url": "https://api.github.com/users/GabrielePicco/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks!" ]
1,601
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #7609 ## Who can review? tokenizers: @mfuntowicz
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7610/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7610", "html_url": "https://github.com/huggingface/transformers/pull/7610", "diff_url": "https://github.com/huggingface/transformers/pull/7610.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7610.patch", "merged_at": 1602022560000 }
https://api.github.com/repos/huggingface/transformers/issues/7609
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7609/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7609/comments
https://api.github.com/repos/huggingface/transformers/issues/7609/events
https://github.com/huggingface/transformers/issues/7609
715,557,797
MDU6SXNzdWU3MTU1NTc3OTc=
7,609
Tokenizer: UnboundLocalError with PaddingStrategy MAX_LENGTH
{ "login": "GabrielePicco", "id": 12031208, "node_id": "MDQ6VXNlcjEyMDMxMjA4", "avatar_url": "https://avatars.githubusercontent.com/u/12031208?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GabrielePicco", "html_url": "https://github.com/GabrielePicco", "followers_url": "https://api.github.com/users/GabrielePicco/followers", "following_url": "https://api.github.com/users/GabrielePicco/following{/other_user}", "gists_url": "https://api.github.com/users/GabrielePicco/gists{/gist_id}", "starred_url": "https://api.github.com/users/GabrielePicco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GabrielePicco/subscriptions", "organizations_url": "https://api.github.com/users/GabrielePicco/orgs", "repos_url": "https://api.github.com/users/GabrielePicco/repos", "events_url": "https://api.github.com/users/GabrielePicco/events{/privacy}", "received_events_url": "https://api.github.com/users/GabrielePicco/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,602
1,602
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help tokenizers: @mfuntowicz ## Information Model I am using: <transformers.tokenization_bert.BertTokenizer> The problem arises when using: * the official example scripts: using the `encode_plus` The tasks I am working on is: * Tokenizing ## To reproduce Steps to reproduce the behavior: 1. tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") 2. tokenizer.encode_plus("hello word", max_length=128, padding=PaddingStrategy.MAX_LENGTH) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> `UnboundLocalError: local variable 'padding_strategy' referenced before assignment` ## Expected behavior Return Tokenizer output
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7609/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7608
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7608/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7608/comments
https://api.github.com/repos/huggingface/transformers/issues/7608/events
https://github.com/huggingface/transformers/issues/7608
715,545,211
MDU6SXNzdWU3MTU1NDUyMTE=
7,608
Ability to pre-train BART model
{ "login": "Hazoom", "id": 13545154, "node_id": "MDQ6VXNlcjEzNTQ1MTU0", "avatar_url": "https://avatars.githubusercontent.com/u/13545154?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hazoom", "html_url": "https://github.com/Hazoom", "followers_url": "https://api.github.com/users/Hazoom/followers", "following_url": "https://api.github.com/users/Hazoom/following{/other_user}", "gists_url": "https://api.github.com/users/Hazoom/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hazoom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hazoom/subscriptions", "organizations_url": "https://api.github.com/users/Hazoom/orgs", "repos_url": "https://api.github.com/users/Hazoom/repos", "events_url": "https://api.github.com/users/Hazoom/events{/privacy}", "received_events_url": "https://api.github.com/users/Hazoom/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I was wondering if there was any follow up on this topic as I'd also be interested in continued pertaining on bart-base." ]
1,601
1,694
1,608
NONE
null
# 🚀 Feature request Ability to pre-train BART model, same like there is an ability to pre-train BERT and other models. ## Motivation I'm using a pre-trained BART model for sequence-to-sequence problem and trained it using my own data, using the examples here: https://github.com/huggingface/transformers/tree/master/examples/seq2seq I was wondering if there is a chance to add an ability to continue the pre-training of the already pre-trained `facebook/bart-base` and `facebook/bart-large` models, with my own unsupervised data, in order to improve the results. @sshleifer Can you please help? thanks in advance!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7608/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7608/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7607
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7607/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7607/comments
https://api.github.com/repos/huggingface/transformers/issues/7607/events
https://github.com/huggingface/transformers/pull/7607
715,507,014
MDExOlB1bGxSZXF1ZXN0NDk4NDAyMzQx
7,607
Create README.md (LEGAL-BERT Model card)
{ "login": "iliaschalkidis", "id": 1626984, "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliaschalkidis", "html_url": "https://github.com/iliaschalkidis", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "That's really cool, thanks for sharing @iliaschalkidis ", "Thanks @julien-c for your nice comments and for building and improving such a great library. Is there any chance, that we could place all 5 LEGAL-BERT variants in a sub-folder, i.e., `/legal-bert`, inside the account folder `/nlpaueb`? Kind of OCD though 🤓 \r\n", "I'm not sure what you mean :)\r\n\r\nDo you want to e.g. rename `bert-base-uncased-contracts` to `legal-bert-base-uncased-contracts`? Or do you want `nlpaueb/legal-bert/bert-base-uncased-contracts`? We don't really want to do the latter IMO (have increased levels of nesting) because:\r\n- I'm afraid it might get confusing for users of the models,\r\n- some of the tooling we are currently building is expecting a org_name/model_name layout.\r\n\r\nWhat do you think?", "I was referring to the second scenario, but I totally understand it will make things more complicated on your side. Thanks again!\r\n\r\n" ]
1,601
1,602
1,601
NONE
null
Model description for all LEGAL-BERT models, published as part of "LEGAL-BERT: The Muppets straight out of Law School". Chalkidis et al., 2018, In Findings of EMNLP 2020 # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7607/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7607", "html_url": "https://github.com/huggingface/transformers/pull/7607", "diff_url": "https://github.com/huggingface/transformers/pull/7607.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7607.patch", "merged_at": 1601988378000 }
https://api.github.com/repos/huggingface/transformers/issues/7606
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7606/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7606/comments
https://api.github.com/repos/huggingface/transformers/issues/7606/events
https://github.com/huggingface/transformers/pull/7606
715,482,355
MDExOlB1bGxSZXF1ZXN0NDk4MzgyMjg3
7,606
Add ProtT5-XL-BFD model card
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,601
1,601
1,601
CONTRIBUTOR
null
Fixes # (issue) Create a new card for our ProtT5-XL-BFD model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Model Cards: @julien-c T5: @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7606/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7606", "html_url": "https://github.com/huggingface/transformers/pull/7606", "diff_url": "https://github.com/huggingface/transformers/pull/7606.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7606.patch", "merged_at": 1601979562000 }
https://api.github.com/repos/huggingface/transformers/issues/7605
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7605/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7605/comments
https://api.github.com/repos/huggingface/transformers/issues/7605/events
https://github.com/huggingface/transformers/pull/7605
715,477,102
MDExOlB1bGxSZXF1ZXN0NDk4Mzc3OTUw
7,605
TensorFlow training/inference optimization
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> If that's all it takes, that's fantastic! Did you manage to obtain the performance improvements that were initially mentioned thanks to this?\r\n\r\nOn my machine with my GPU yes.\r\n\r\n>Also I'm realizing now that we don't have integration testing for our TensorFlow models, and this seems like a situation where having some would be needed. Could we work on adding these tests for the models modified here at first, and then add it to the rest of the models?\r\n\r\nSure! It is a good idea!\r\n\r\n> I can help you work on it if you're lacking time!\r\n\r\nI would appreciate if you have time yes 😃 ", "Okay, will take a look at doing the integrations tests sometimes tonight. Will let you know!", "@jplu \r\n\r\nFor learning purpose, I am wondering which operations was done on CPU instead of GPU. I saw you changed `Dense` to `EinsumDense` in several places, and remove several operations about shape changing. Is shape changing done on CPU and `EinsumDense` could avoid this? Could you give me some information about this, so I can read and learn it? Thanks.\r\n", "@chiapas \r\n\r\nIf you take a look at #6771 is it quite well detailed. The issue was coming from transpose+matmul that was done on CPU. einsumDense allows you to do all these computation directly in the layer but at the cost of changing the shapes of the original layers, that why we have modified the way we load the TF models.\r\n\r\nTo do this PR I basically took example on the original BERT implementation right [here](https://github.com/tensorflow/models/blob/master/official/nlp/transformer/attention_layer.py).\r\n\r\n", "Thanks a lot @LysandreJik !!\r\n\r\nAs I'm currently working on from scratch LM training for TF models, I don't have much time to really focus on this.", "> transpose+matmul\r\n\r\n@jplu Thanks. I am superised by this `transpose+matmul that was done on CPU`.", "> \r\n> \r\n> Thanks a lot @LysandreJik !!\r\n> \r\n> As I'm currently working on from scratch LM training for TF models, I don't have much time to really focus on this.\r\n\r\n@jplu You also works on LM training for TF models? I plan to go back to a pending PR #6955 I created once the `test_trainer_tf.py` is done. Do PR #6955 and your work on TF models LM training overlap? Currently that PR is still empty though.", "@chiapas This is exactly what I'm doing, and the models needs some rework that's why I'm mostly focus on BERT to have at least one model working.\r\n\r\nI just done yesterday the data pipeline with random masking generation.", "> @chiapas This is exactly what I'm doing, and the models needs some rework that's why I'm mostly focus on BERT to have at least one model working.\r\n> \r\n> I just done yesterday the data pipeline with random masking generation.\r\n\r\nAh, ok. I guess my PR was pending too long and it is my bad not to communicate with you first. I planed to do this while I finished a notebook on Kaggle [Masked, My Dear Watson - MLM with TPU](https://www.kaggle.com/yihdarshieh/masked-my-dear-watson-mlm-with-tpu), which also works on MLM.\r\n\r\nSince you already have more progresses (and also you are HF member), it is better for you to continue. However, if there is something I can contribute for this TF LM task, I would love to do it.\r\n\r\n", "> Since you already have more progresses (and also you are HF member), it is better for you to continue. However, if there is something I can contribute for this TF LM task, I would love to do it.\r\n\r\nThanks! I will let you know.", "That's awesome! I will see what results the TF benchmark scripts give before/after this PR.\r\n\r\nStrongly agree with @LysandreJik that we should add integration tests before merging this PR.", "I ran the benchmarks: `python examples/benchmarking/run_benchmark_tf.py --models bert-base-cased --env_print` in the following environment:\r\n\r\n```\r\n- transformers_version: 3.3.1 \r\n- framework: TensorFlow \r\n- eager_mode: False \r\n- use_xla: False \r\n- framework_version: 2.3.0 \r\n- python_version: 3.6.10 \r\n- system: Linux \r\n- cpu: x86_64 \r\n- architecture: 64bit \r\n- date: 2020-10-06 \r\n- time: 19:06:48.378935 \r\n- fp16: False \r\n- use_multiprocessing: True \r\n- only_pretrain_model: False \r\n- cpu_ram_mb: 32088 \r\n- use_gpu: True \r\n- num_gpus: 1 \r\n- gpu: TITAN RTX \r\n- gpu_ram_mb: 24217 \r\n- gpu_power_watts: 280.0 \r\n- gpu_performance_state: 8 \r\n- use_tpu: False \r\n```\r\n\r\n\r\nCurrently, on master:\r\n\r\n``` \r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n bert-base-cased 8 8 0.085 \r\n bert-base-cased 8 32 0.166 \r\n bert-base-cased 8 128 0.513 \r\n bert-base-cased 8 512 2.629 \r\n-------------------------------------------------------------------------------- \r\n```\r\n\r\nIn this `tf-optim` branch, the results are:\r\n\r\n```\r\n==================== INFERENCE - SPEED - RESULT ====================\r\n--------------------------------------------------------------------------------\r\n Model Name Batch Size Seq Length Time in s \r\n--------------------------------------------------------------------------------\r\n bert-base-cased 8 8 0.088 \r\n bert-base-cased 8 32 0.176 \r\n bert-base-cased 8 128 0.531 \r\n bert-base-cased 8 512 3.028 \r\n--------------------------------------------------------------------------------\r\n```\r\n\r\n=> So the speed results are more or less identical with the way the benchmarks are used.\r\n\r\nI don't compile the model with Keras, but just add the \"@tf.function\" decorator to the function to transform the function into graph mode. So not sure what to think of that.... => @jplu - colud you maybe check the benchmark script and see if you can get a speed-up there? Or if the benchmark script is wrong? \r\n\r\n```\r\npython examples/benchmarking/run_benchmark_tf.py --models bert-base-cased --env_print\r\n```", "The benchmark script is ok, but to see the difference you have to create a saved_model and run the model in TF Serving. Your benchmark don't take into account all the optimization TF serving does for inference.\r\n\r\nWe should update the benchmark script to include:\r\n\r\n- Saved model creation\r\n- run a the saved model with the TF Serving tool\r\n- adapt the the benchmark to include gRPC calls to use the model from TF Serving.", "Will be integrated into the PR #7753" ]
1,601
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? This PR fixes a performance issue where some operation was done on CPU instead of GPU and would result to put the GPU in idle mode. This optimization is feasible thanks to the recent update we made on the way we load the TF weights. @patrickvonplaten I have done few changes in the `TFLongformer` model but I'm sure it can be further optimized the same way (see `TFLongformerSelfAttention`) but as I don't know much on how works this model, can you take a look if the same optimization can be applied? Fixes #6771
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7605/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7605", "html_url": "https://github.com/huggingface/transformers/pull/7605", "diff_url": "https://github.com/huggingface/transformers/pull/7605.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7605.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7604
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7604/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7604/comments
https://api.github.com/repos/huggingface/transformers/issues/7604/events
https://github.com/huggingface/transformers/issues/7604
715,424,844
MDU6SXNzdWU3MTU0MjQ4NDQ=
7,604
way to make inference Zero Shot pipeline faster?
{ "login": "acul3", "id": 56231298, "node_id": "MDQ6VXNlcjU2MjMxMjk4", "avatar_url": "https://avatars.githubusercontent.com/u/56231298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/acul3", "html_url": "https://github.com/acul3", "followers_url": "https://api.github.com/users/acul3/followers", "following_url": "https://api.github.com/users/acul3/following{/other_user}", "gists_url": "https://api.github.com/users/acul3/gists{/gist_id}", "starred_url": "https://api.github.com/users/acul3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/acul3/subscriptions", "organizations_url": "https://api.github.com/users/acul3/orgs", "repos_url": "https://api.github.com/users/acul3/repos", "events_url": "https://api.github.com/users/acul3/events{/privacy}", "received_events_url": "https://api.github.com/users/acul3/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Closing this and moving the conversation (w/ my answer) to [the forums](https://discuss.huggingface.co/t/way-to-make-inference-zero-shot-pipeline-faster/1384/2?u=joeddav)." ]
1,601
1,601
1,601
CONTRIBUTOR
null
Hi Can you guys give me tips how to make Zero Shot pipeline inference faster? My current approach right now is reducing the model size/parameter (trying to train "base model" instead of "large model) Is there another approach? CCing @joeddav
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7604/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7603
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7603/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7603/comments
https://api.github.com/repos/huggingface/transformers/issues/7603/events
https://github.com/huggingface/transformers/pull/7603
715,387,862
MDExOlB1bGxSZXF1ZXN0NDk4MzA1NDQz
7,603
Added model cards for Tagalog BERT models
{ "login": "jcblaisecruz02", "id": 24757547, "node_id": "MDQ6VXNlcjI0NzU3NTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/24757547?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcblaisecruz02", "html_url": "https://github.com/jcblaisecruz02", "followers_url": "https://api.github.com/users/jcblaisecruz02/followers", "following_url": "https://api.github.com/users/jcblaisecruz02/following{/other_user}", "gists_url": "https://api.github.com/users/jcblaisecruz02/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcblaisecruz02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcblaisecruz02/subscriptions", "organizations_url": "https://api.github.com/users/jcblaisecruz02/orgs", "repos_url": "https://api.github.com/users/jcblaisecruz02/repos", "events_url": "https://api.github.com/users/jcblaisecruz02/events{/privacy}", "received_events_url": "https://api.github.com/users/jcblaisecruz02/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Thanks!" ]
1,601
1,602
1,602
NONE
null
# What does this PR do? Adds model cards for five Tagalog BERT models: * jcblaise/bert-tagalog-base-cased * jcblaise/bert-tagalog-base-uncased * jcblaise/bert-tagalog-base-cased-WWM * jcblaise/bert-tagalog-base-uncased-WWM * jcblaise/distilbert-tagalog-base-uncased
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7603/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7603", "html_url": "https://github.com/huggingface/transformers/pull/7603", "diff_url": "https://github.com/huggingface/transformers/pull/7603.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7603.patch", "merged_at": 1602103761000 }
https://api.github.com/repos/huggingface/transformers/issues/7602
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7602/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7602/comments
https://api.github.com/repos/huggingface/transformers/issues/7602/events
https://github.com/huggingface/transformers/issues/7602
715,361,073
MDU6SXNzdWU3MTUzNjEwNzM=
7,602
RAG : Can we fine-tune RAG with update frequency method similar to Fairseq framework?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @shamanez - what do you mean by \"update frequency\"? You don't need to use 8 GPUS => you can just reduce the number of gpus as you wish and keep the \"same\" batch size by increasing the `gradient_accumulation_steps` - does this make sense? ", "Let's say if the effective batch size is 32 with 8 GPUs and I want to keep the same batch size with 4 GPUs, I just need to change the _gradient_accumulation_steps_ to 2 right?\r\n\r\n[Update_Freq](https://fairseq.readthedocs.io/en/latest/command_line_tools.html#fairseq-train) is what fairseq used to keep the effective batch size same with less number of GPUs.", "Yeah exactly, in `examples/rag/finetune.sh` the default is `gpus=8` and `gradient_accumalation_steps=1`. So if you want to use less gpus while keeping the same \"effective\" batch size you should increase `gradient_accumalation_steps` accordingly", "Thanks a lot. :) " ]
1,601
1,602
1,602
CONTRIBUTOR
null
RAG fine-tune script needs 8 GPUs to train. Is there any chance that the training can be done with less number of GPUs using the update frequency?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7602/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7601
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7601/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7601/comments
https://api.github.com/repos/huggingface/transformers/issues/7601/events
https://github.com/huggingface/transformers/issues/7601
715,318,475
MDU6SXNzdWU3MTUzMTg0NzU=
7,601
Does tokenizer.from_pretrained tokenize text on CPU even a GPU is available?
{ "login": "BaoshengHeTR", "id": 60898384, "node_id": "MDQ6VXNlcjYwODk4Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/60898384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BaoshengHeTR", "html_url": "https://github.com/BaoshengHeTR", "followers_url": "https://api.github.com/users/BaoshengHeTR/followers", "following_url": "https://api.github.com/users/BaoshengHeTR/following{/other_user}", "gists_url": "https://api.github.com/users/BaoshengHeTR/gists{/gist_id}", "starred_url": "https://api.github.com/users/BaoshengHeTR/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BaoshengHeTR/subscriptions", "organizations_url": "https://api.github.com/users/BaoshengHeTR/orgs", "repos_url": "https://api.github.com/users/BaoshengHeTR/repos", "events_url": "https://api.github.com/users/BaoshengHeTR/events{/privacy}", "received_events_url": "https://api.github.com/users/BaoshengHeTR/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, indeed GPUs are not used when doing tokenization. There are no matrix operations and there's no need for heavy parallelization, so no need to rely on GPUs for this operation." ]
1,601
1,601
1,601
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details For a tokenizer from `tokenizer = AutoTokenizer.from_pretrained(model_name)`, if `tokenizer.encode_plus(text)` works on CPUs even a GPU is available. I tried to run such code on a AWS GPU machine instance, but found GPUs are totally not used. Thanks.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7601/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7600
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7600/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7600/comments
https://api.github.com/repos/huggingface/transformers/issues/7600/events
https://github.com/huggingface/transformers/issues/7600
715,280,276
MDU6SXNzdWU3MTUyODAyNzY=
7,600
TFBertMode.pre_trained('bert-base-uncased') --> OSError
{ "login": "sansanai", "id": 25274898, "node_id": "MDQ6VXNlcjI1Mjc0ODk4", "avatar_url": "https://avatars.githubusercontent.com/u/25274898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sansanai", "html_url": "https://github.com/sansanai", "followers_url": "https://api.github.com/users/sansanai/followers", "following_url": "https://api.github.com/users/sansanai/following{/other_user}", "gists_url": "https://api.github.com/users/sansanai/gists{/gist_id}", "starred_url": "https://api.github.com/users/sansanai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sansanai/subscriptions", "organizations_url": "https://api.github.com/users/sansanai/orgs", "repos_url": "https://api.github.com/users/sansanai/repos", "events_url": "https://api.github.com/users/sansanai/events{/privacy}", "received_events_url": "https://api.github.com/users/sansanai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I deleted my virtual environment and workspace, re-installed it, re-runed the above codes, and found that the above codes worked without error\r\n" ]
1,601
1,601
1,601
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-centos-7.8.2003-Core - Python version: 3.7.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: - Using distributed or parallel set-up in script?: Question : I wanted to see the pretrained bert model summary, So I opened Jupyter notebook on my computer installed Quadro RTX 5000 GPUs , and typed the following code to load pretrained bert model using TFBertModel.from_pretrained() function. After running cell, but I got Error messages... --- test codes --- from transformers import TFBertModel encoder = TFBertModel.from_pretrained('bert-base-uncased') --- end of test codes --- ---- error messages start --- OSError Traceback (most recent call last) ~/anaconda3/envs/tf23/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 354 resume_download=resume_download, --> 355 local_files_only=local_files_only, 356 ) ~/anaconda3/envs/tf23/lib/python3.7/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only) 729 # File, but it doesn't exist. --> 730 raise EnvironmentError("file {} not found".format(url_or_filename)) 731 else: OSError: file bert-base-uncased/config.json not found During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-4-c74bfe775797> in <module> 1 from transformers import TFBertModel 2 ----> 3 encoder = TFBertModel.from_pretrained('bert-base-uncased') ~/anaconda3/envs/tf23/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 543 proxies=proxies, 544 local_files_only=local_files_only, --> 545 **kwargs, 546 ) 547 else: ~/anaconda3/envs/tf23/lib/python3.7/site-packages/transformers/configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 313 314 """ --> 315 config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) 316 return cls.from_dict(config_dict, **kwargs) 317 ~/anaconda3/envs/tf23/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 366 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n" 367 ) --> 368 raise EnvironmentError(msg) 369 370 except json.JSONDecodeError: OSError: Can't load config for 'bert-base-uncased'. Make sure that: - 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'bert-base-uncased' is the correct path to a directory containing a config.json file ----- end of error messages --- I also tested above codes in Colab, In Colab, the above code worked well without errors. Please, let me know how to solve this problem.. Thanks in advance
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7600/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7599
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7599/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7599/comments
https://api.github.com/repos/huggingface/transformers/issues/7599/events
https://github.com/huggingface/transformers/pull/7599
715,259,946
MDExOlB1bGxSZXF1ZXN0NDk4MTk5MzUw
7,599
Support T5 Distillation w/hidden state supervision
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
CONTRIBUTOR
null
Support distilling t5 for summarization and translation with hidden state supervision. cc @patil-suraj @patrickvonplaten Here are some very simple commands that work for now: ### Yes Teacher/Traditional Distillation ```bash python distillation.py --teacher t5-small --data_dir cnn_dm \ --student_decoder_layers 3 --student_encoder_layers 6 --tokenizer_name t5-small \ --learning_rate=3e-4 --freeze_encoder --no_teacher --freeze_embeds \ --do_train --train_batch_size 32 \ --do_predict \ --model_name_or_path t5-small --eval_beams 2 --eval_max_gen_length 142 \ --val_check_interval 0.25 --n_val 1000 \ --output_dir distilt5 --gpus 1 --logger_name wandb ``` ### No teacher ```bash python make_student.py t5-small t5_small_6_3 6 3 python finetune.py --model_name_or_path t5_small_6_3 --data_dir cnn_dm \ --learning_rate=3e-4 --freeze_encoder --freeze_embeds \ --do_train --train_batch_size 32 \ --do_predict \ --model_name_or_path t5_small_6_3 --eval_beams 2 --eval_max_gen_length 142 \ --val_check_interval 0.25 --n_val 1000 \ --output_dir distilt5 --gpus 1 --logger_name wandb ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7599/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7599/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7599", "html_url": "https://github.com/huggingface/transformers/pull/7599", "diff_url": "https://github.com/huggingface/transformers/pull/7599.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7599.patch", "merged_at": 1601947909000 }
https://api.github.com/repos/huggingface/transformers/issues/7598
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7598/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7598/comments
https://api.github.com/repos/huggingface/transformers/issues/7598/events
https://github.com/huggingface/transformers/pull/7598
715,197,996
MDExOlB1bGxSZXF1ZXN0NDk4MTQ3NzUy
7,598
Docker GPU Images: Add NVIDIA/apex to the cuda images with pytorch
{ "login": "AdrienDS", "id": 1977281, "node_id": "MDQ6VXNlcjE5NzcyODE=", "avatar_url": "https://avatars.githubusercontent.com/u/1977281?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdrienDS", "html_url": "https://github.com/AdrienDS", "followers_url": "https://api.github.com/users/AdrienDS/followers", "following_url": "https://api.github.com/users/AdrienDS/following{/other_user}", "gists_url": "https://api.github.com/users/AdrienDS/gists{/gist_id}", "starred_url": "https://api.github.com/users/AdrienDS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AdrienDS/subscriptions", "organizations_url": "https://api.github.com/users/AdrienDS/orgs", "repos_url": "https://api.github.com/users/AdrienDS/repos", "events_url": "https://api.github.com/users/AdrienDS/events{/privacy}", "received_events_url": "https://api.github.com/users/AdrienDS/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @AdrienDS, \r\n\r\nThanks for suggesting the changes.\r\n\r\nDid you try building the image locally? I totally understand the motivation behind the use of the `-devel` layer parent, but I have concern regarding the final image size. Would it be possible for you to include the resulting size for th `devel` based image?\r\n\r\nOtherwise look good for me!\r\n", "Hi @mfuntowicz It does increase the size, from 4.46GB (v3.3.1) to 6.53GB for `transformers-pytorch-gpu`. \r\n\r\nIf it's too large, could we create a separate image ? (like: `transformers-pytorch-gpu-apex`)", "Ok, that shouldn't hurt too much, let's go!\r\n\r\nThanks for the contribution 👍 " ]
1,601
1,601
1,601
CONTRIBUTOR
null
# What does this PR do? - Use cuda:10.2 image instead of 10.1 (to address version mismatch warning with pytorch) - Use `devel` version that is built on the `runtime` and includes headers and development tools (it was otherwise failing to build apex). For a description of the different flavors, see: https://hub.docker.com/r/nvidia/cuda -> Overview of Images - Download and build `apex` for pytorch. https://github.com/NVIDIA/apex#quick-start ## Docs - https://github.com/NVIDIA/apex - https://nvidia.github.io/apex/ - https://hub.docker.com/r/nvidia/cuda ## Who can review? - @mfuntowicz co-authored the Dockerfiles in 71c87119
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7598/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7598", "html_url": "https://github.com/huggingface/transformers/pull/7598", "diff_url": "https://github.com/huggingface/transformers/pull/7598.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7598.patch", "merged_at": 1601990612000 }
https://api.github.com/repos/huggingface/transformers/issues/7597
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7597/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7597/comments
https://api.github.com/repos/huggingface/transformers/issues/7597/events
https://github.com/huggingface/transformers/pull/7597
715,174,856
MDExOlB1bGxSZXF1ZXN0NDk4MTI4NTMw
7,597
Enhance TFTrainer.save_model()
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Closing in favor of #7619 " ]
1,601
1,651
1,602
COLLABORATOR
null
# What does this PR do? Currently, `TFTrainer.save_model()` raises errors if the model is not `TFPreTrainedModel` . However `Trainer` works fine with `torch.nn.modules.Module`. This is a step to make TFTrainer work with usual `tf.keras.models.Model` models. The idea (from @sgugger) is that a user is building their own models that work like ours (e.g., return the loss as the first output) and can train them with Trainer. Furthermore, a SavedModel is also saved using `tf.saved_model.save()`. I tried to avoid duplicated code (check and create output directory before saving), and therefore there is a new method `save_tf_model()` in `modeling_tf_utils`, which is used in `trainer_tf.py`. For @jplu and @sgugger .
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7597/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7597", "html_url": "https://github.com/huggingface/transformers/pull/7597", "diff_url": "https://github.com/huggingface/transformers/pull/7597.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7597.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7596
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7596/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7596/comments
https://api.github.com/repos/huggingface/transformers/issues/7596/events
https://github.com/huggingface/transformers/pull/7596
715,151,415
MDExOlB1bGxSZXF1ZXN0NDk4MTA4ODAz
7,596
Trainer callbacks
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Finally! Can't wait for this PR to be merged.\r\nI've briefly looked at the code and from my understanding it should support this case, but correct me if I'm wrong:\r\n\r\n_When saving checkpoints (including model weights as well as scheduler and optimizer states), I will be able to attach to this process and store the checkpoint in some external repository (i.e GCS / W&B artifact)_,\r\n\r\nright?", "Yes, you will be able to inject custom behavior to the saved checkpoint with the `on_save` event." ]
1,601
1,602
1,602
COLLABORATOR
null
# What does this PR do? This PR does two things: clean up a bit the files supporting `Trainer` and the `Trainer` class, and add callbacks to `Trainer`. ### Callbacks This PR introduces a new class called `TrainerCallback` that can access the current state of the training loop and make some decisions (shown in the `TrainerControl` object). This allows us to isolate the pieces of code that do log-reporting on the various ML platforms or report progress in another file and clean up the code of the main `train` method of the `Trainer`. This way, any new platform we want to integrate with for log-reporting or new behavior (like early stopping) can be implemented in a Callback while `Trainer` focuses on the main aspects of actual training, with or without mixed precision, on one or several GPUs/TPUs. As an example, integrations with TensorBoard, Wandb and ComeML are moved to the `integrations` module in clean callbacks, while the control flow of logs/saves/evaluations as well as progress reporting are moved to the `trainer_callback` file. Most of the behavior stays the same as this PR essentially moves code around, but there are a few API changes: - deprecating the `tb_writer` argument in `Trainer` (with full backward compatibility), people should now use the `TensorBoardCallback`. - a new `callbacks` argument in the `Trainer` init and new `add_callback`, `pop_callback` and `remove_callback` for the `Trainer`. For all of those, you can either pass an instance of a callback or a callback class. - Cleaned up the progress bars a bit with only one main progress bar over all the steps we will do for training and evaluation bars that disappear after being done ### Progress bars Here is the new progress bar behavior in console mode (checked in single and multi GPU envs, to make sure only one progress bar is displayed/logs are only printed once): ![](https://i.ibb.co/Fq60bFS/console-progress.png) and in a jupyter notebook: ![](https://i.ibb.co/kSzs1fC/notebook-progress.png) ### General cleanup Not directly related to this PR, but related to the general cleanup of `Trainer`, I moved a bit of stuff around: moved the utils at the start of `Trainer` to a new `trainer_utils_pt`. This way `trainer_utils` can be about the general training utils that work on both PyTorch and TensorFlow, and I moved the ones specific to PyTorch to `trainer_utils_pt`. Also in `Trainer`, the code for logs, save and evaluation ended being duplicated between the end of a training step and the end of an epoch, so I put it in its private method to improve readability.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7596/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7596", "html_url": "https://github.com/huggingface/transformers/pull/7596", "diff_url": "https://github.com/huggingface/transformers/pull/7596.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7596.patch", "merged_at": 1602082222000 }
https://api.github.com/repos/huggingface/transformers/issues/7595
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7595/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7595/comments
https://api.github.com/repos/huggingface/transformers/issues/7595/events
https://github.com/huggingface/transformers/pull/7595
715,098,028
MDExOlB1bGxSZXF1ZXN0NDk4MDY0MDAz
7,595
change return dicitonary for DataCollatorForNextSentencePrediction from masked_lm_labels to labels
{ "login": "gmihaila", "id": 22454783, "node_id": "MDQ6VXNlcjIyNDU0Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gmihaila", "html_url": "https://github.com/gmihaila", "followers_url": "https://api.github.com/users/gmihaila/followers", "following_url": "https://api.github.com/users/gmihaila/following{/other_user}", "gists_url": "https://api.github.com/users/gmihaila/gists{/gist_id}", "starred_url": "https://api.github.com/users/gmihaila/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gmihaila/subscriptions", "organizations_url": "https://api.github.com/users/gmihaila/orgs", "repos_url": "https://api.github.com/users/gmihaila/repos", "events_url": "https://api.github.com/users/gmihaila/events{/privacy}", "received_events_url": "https://api.github.com/users/gmihaila/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,602
1,601
CONTRIBUTOR
null
# What does this PR do? The `masked_lm_labels` argument from *DataCollatorForNextSentencePrediction* is deprecated and will be removed in a future version, use `labels` instead. I changed the dictionary key from `masked_lm_labels ` to `labels`. It will avoid any future errors when `masked_lm_labels` won't be used anymore. Not a lot of people use the *DataCollatorForNextSentencePrediction* and I think this will get overlooked in the future if not fixed. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> It fixes warning that appear when using `trainer` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. **Not the case** - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). **Not needed.** - [x] Did you write any new necessary tests? **Was not needed, the change that I made is very minor.** ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7595/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7595/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7595", "html_url": "https://github.com/huggingface/transformers/pull/7595", "diff_url": "https://github.com/huggingface/transformers/pull/7595.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7595.patch", "merged_at": 1601989924000 }
https://api.github.com/repos/huggingface/transformers/issues/7594
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7594/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7594/comments
https://api.github.com/repos/huggingface/transformers/issues/7594/events
https://github.com/huggingface/transformers/issues/7594
715,086,652
MDU6SXNzdWU3MTUwODY2NTI=
7,594
RagTokenForGeneration.from_pretrained fails while running demo script
{ "login": "mthielk", "id": 56938492, "node_id": "MDQ6VXNlcjU2OTM4NDky", "avatar_url": "https://avatars.githubusercontent.com/u/56938492?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mthielk", "html_url": "https://github.com/mthielk", "followers_url": "https://api.github.com/users/mthielk/followers", "following_url": "https://api.github.com/users/mthielk/following{/other_user}", "gists_url": "https://api.github.com/users/mthielk/gists{/gist_id}", "starred_url": "https://api.github.com/users/mthielk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mthielk/subscriptions", "organizations_url": "https://api.github.com/users/mthielk/orgs", "repos_url": "https://api.github.com/users/mthielk/repos", "events_url": "https://api.github.com/users/mthielk/events{/privacy}", "received_events_url": "https://api.github.com/users/mthielk/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "This may be related to an incompatible pytorch or cuda version", "@mthielk - this can very well be due to the PyTorch version. Did you try with a more current version of PyTorch? ", "@patrickvonplaten I face the same issue with PyTorch version 1.4.0. ", "I can confirm that this error occurs with PyTorch version 1.4.0!", "Okey, after some internal discussion the error is the following. PyTorch changed its `torch.save()` method officially in PyTorch 1.6.0 (check https://github.com/pytorch/pytorch/releases for 1.6.0 under \"Deprecations\") which means that models saved with torch >= 1.6.0 are not loadable with torch <= 1.4.0 -> hence this error. So for RAG the minimum required torch version is torch 1.5.0 it seems. (thanks @sgugger @LysandreJik )" ]
1,601
1,606
1,606
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-4.4.0-1113-aws-x86_64-with-debian-stretch-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.3.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @VictorSanh @patrickvonplaten @sshleifer transformers/modeling_utils.py <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): RAG The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. install new conda env py=3.7 2. install RAG requirements 3. run example code from https://huggingface.co/transformers/master/model_doc/rag.html ```python Python 3.7.9 (default, Aug 31 2020, 12:42:55) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration >>> import torch >>> tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") >>> retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) Using custom data configuration dummy.psgs_w100.nq.no_index Reusing dataset wiki_dpr (/homes/thielk/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2) Using custom data configuration dummy.psgs_w100.nq.exact Reusing dataset wiki_dpr (/homes/thielk/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2) >>> model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) ``` stack trace: ```python Traceback (most recent call last): File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 187, in nti n = int(s.strip() or "0", 8) ValueError: invalid literal for int() with base 8: 'del.embe' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 2289, in next tarinfo = self.tarinfo.fromtarfile(self) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 1095, in fromtarfile obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 1037, in frombuf chksum = nti(buf[148:156]) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 189, in nti raise InvalidHeaderError("invalid header") tarfile.InvalidHeaderError: invalid header During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/site-packages/torch/serialization.py", line 595, in _load return legacy_load(f) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/site-packages/torch/serialization.py", line 506, in legacy_load with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar, \ File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 1593, in open return func(name, filemode, fileobj, **kwargs) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 1623, in taropen return cls(name, mode, fileobj, **kwargs) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 1486, in __init__ self.firstmember = self.next() File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/tarfile.py", line 2301, in next raise ReadError(str(e)) tarfile.ReadError: invalid header During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/site-packages/transformers/modeling_utils.py", line 927, in from_pretrained state_dict = torch.load(resolved_archive_file, map_location="cpu") File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/site-packages/torch/serialization.py", line 426, in load return _load(f, map_location, pickle_module, **pickle_load_args) File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/site-packages/torch/serialization.py", line 599, in _load raise RuntimeError("{} is a zip archive (did you mean to use torch.jit.load()?)".format(f.name)) RuntimeError: /homes/thielk/.cache/torch/transformers/06fe449ffe41cbe16aeb1f5976989313464a3c44a605e9a8b91bf6440dfa6026.696574d8c17eafbac08f43f01e951252057f8feb133b64a33b76d4c47d65367a is a zip archive (did you mean to use torch.jit.load()?) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/homes/thielk/miniconda3/envs/transformers-pytorch/lib/python3.7/site-packages/transformers/modeling_utils.py", line 930, in from_pretrained "Unable to load weights from pytorch checkpoint file. " OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior be able to completely run example code from RAG documentation May be related to #7583 <!-- A clear and concise description of what you would expect to happen. --> ```python from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration import torch tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) # initialize with RagRetriever to do everything in one forward call model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt") input_ids = input_dict["input_ids"] outputs = model(input_ids=input_ids, labels=input_dict["labels"]) # or use retriever seperately model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", use_dummy_dataset=True) # 1. Encode question_hidden_states = model.question_encoder(input_ids)[0] # 2. Retrieve docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt") doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), docs_dict["retrieved_doc_embeds"].float().transpose(1, 2)).squeeze(1) # 3. Forward to generator outputs = model(context_input_ids=docs_dict["context_input_ids"], context_attention_mask=docs_dict["context_attention_mask"], doc_scores=doc_scores, decoder_input_ids=input_dict["labels"]) # or directly generate generated = model.generate(input_ids=input_dict["input_ids"]) generated_string = tokenizer.batch_decode(generated, skip_special_tokens=True) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7594/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7593
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7593/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7593/comments
https://api.github.com/repos/huggingface/transformers/issues/7593/events
https://github.com/huggingface/transformers/pull/7593
715,053,354
MDExOlB1bGxSZXF1ZXN0NDk4MDI3Mjk1
7,593
[bart] fix config.classif_dropout
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This breaks backwards compatibility on saved `classif_dropout`, but from my checks this is always set to 0 (so incorrect) anyways and will stay 0, so I'm not too concerned.", "```python\r\nfrom transformers import BartConfig\r\nconfig_to_save = BartConfig.from_pretrained('facebook/bart-base', classif_dropout=0.42)\r\nconfig_to_save.classif_dropout # AttributeError\r\n```" ]
1,601
1,601
1,601
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7593/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7593", "html_url": "https://github.com/huggingface/transformers/pull/7593", "diff_url": "https://github.com/huggingface/transformers/pull/7593.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7593.patch", "merged_at": 1601998432000 }
https://api.github.com/repos/huggingface/transformers/issues/7592
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7592/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7592/comments
https://api.github.com/repos/huggingface/transformers/issues/7592/events
https://github.com/huggingface/transformers/issues/7592
715,016,960
MDU6SXNzdWU3MTUwMTY5NjA=
7,592
Using `-1` to mask the loss for the token is deprecated. Please use `-100` instead.
{ "login": "Neptune-Trojans", "id": 68503564, "node_id": "MDQ6VXNlcjY4NTAzNTY0", "avatar_url": "https://avatars.githubusercontent.com/u/68503564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Neptune-Trojans", "html_url": "https://github.com/Neptune-Trojans", "followers_url": "https://api.github.com/users/Neptune-Trojans/followers", "following_url": "https://api.github.com/users/Neptune-Trojans/following{/other_user}", "gists_url": "https://api.github.com/users/Neptune-Trojans/gists{/gist_id}", "starred_url": "https://api.github.com/users/Neptune-Trojans/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Neptune-Trojans/subscriptions", "organizations_url": "https://api.github.com/users/Neptune-Trojans/orgs", "repos_url": "https://api.github.com/users/Neptune-Trojans/repos", "events_url": "https://api.github.com/users/Neptune-Trojans/events{/privacy}", "received_events_url": "https://api.github.com/users/Neptune-Trojans/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@Paul-Trax ,\r\n\r\nThe warning you saw is not because you have some `-1` in the labels. It is because the computation was done inside a tensorflow graph, which was compiled before the computation. While compiling a graph, the different branches are entered, so you saw the warning. Once the real computation begins, i.e. your labels and logits used for computation, everything is fine.\r\n\r\nAn example to see such effect is (note that there is a `@tf.function` before `compute_loss`):\r\n\r\n import tensorflow as tf\r\n from typing import Dict, List, Optional, Union\r\n import warnings\r\n\r\n\r\n def shape_list(x: tf.Tensor) -> List[int]:\r\n \"\"\"\r\n Deal with dynamic shape in tensorflow cleanly.\r\n\r\n Args:\r\n x (:obj:`tf.Tensor`): The tensor we want the shape of.\r\n\r\n Returns:\r\n :obj:`List[int]`: The shape of the tensor as a list.\r\n \"\"\"\r\n static = x.shape.as_list()\r\n dynamic = tf.shape(x)\r\n return [dynamic[i] if s is None else s for i, s in enumerate(static)]\r\n\r\n\r\n @tf.function\r\n def compute_loss(labels, logits):\r\n\r\n loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(\r\n from_logits=True, reduction=tf.keras.losses.Reduction.NONE\r\n )\r\n # make sure only labels that are not equal to -100\r\n # are taken into account as loss\r\n if tf.math.reduce_any(labels == -1):\r\n warnings.warn(\"Using `-1` to mask the loss for the token is deprecated. Please use `-100` instead.\")\r\n active_loss = tf.reshape(labels, (-1,)) != -1\r\n print(f'During graph compiling - branch 1: {labels}')\r\n tf.print(f'Executed in graph - branch 1: {labels}')\r\n else:\r\n active_loss = tf.reshape(labels, (-1,)) != -100\r\n print(f'During graph compiling - branch 2: {labels}')\r\n tf.print(f'Executed in graph - branch 2: {labels}')\r\n\r\n reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, shape_list(logits)[2])), active_loss)\r\n\r\n labels = tf.boolean_mask(tf.reshape(labels, (-1,)), active_loss)\r\n\r\n return loss_fn(labels, reduced_logits)\r\n\r\n\r\n batch_size = 3\r\n seq_len = 5\r\n dim = 4\r\n labels = tf.constant(0, shape=[batch_size, seq_len])\r\n logits = tf.random.uniform(shape=[batch_size, seq_len, dim])\r\n\r\n loss = compute_loss(labels, logits)\r\n\r\n print(f'loss = {loss}')\r\n\r\nYou will see something like\r\n\r\n /home/imo/Desktop/venv/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py:493: UserWarning: Using `-1` to mask the loss for the token is deprecated. Please use `-100` instead.\r\n return py_builtins.overload_of(f)(*args)\r\n During graph compiling - branch 1: Tensor(\"labels:0\", shape=(3, 5), dtype=int32)\r\n During graph compiling - branch 2: Tensor(\"labels:0\", shape=(3, 5), dtype=int32)\r\n Executed in graph - branch 2: Tensor(\"labels:0\", shape=(3, 5), dtype=int32)\r\n loss = [1.5343634 1.610856 1.433133 1.4082022 1.5018827 1.0152265 1.563687\r\n 1.2404382 1.1259079 1.7140993 1.4652599 1.6314502 1.5104814 1.389543\r\n 1.45472 ]\r\n\r\nIf you remove the `tf.function` above `compute_loss`, there is no graph compiled, and you won't see the warning you had.\r\n\r\n During graph compiling - branch 2: [[0 0 0 0 0] # There is no graph compiled, it is just our print statement.\r\n [0 0 0 0 0]\r\n [0 0 0 0 0]]\r\n Executed in graph - branch 2: [[0 0 0 0 0]\r\n [0 0 0 0 0]\r\n [0 0 0 0 0]]\r\n loss = [1.648417 1.2457228 1.5540932 1.7658947 1.4607204 1.529434 1.3607037\r\n 1.6142995 0.9669408 1.316714 1.3906621 1.689343 1.3678703 1.324768\r\n 1.5207067]\r\n\r\nWhen `TFTrainer` is used, the computation is done in graph model.", "Hi chiapas,\r\nThanks a lot for the detailed answer, it was really helpful!\r\nI am closing the issue." ]
1,601
1,602
1,602
NONE
null
- `transformers` version: 3.3.1 - Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid - Python version: 3.6.8 - PyTorch version (GPU?): 1.3.1 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: No ### Who can help @sgugger ## Information Model I am using (Bert, XLNet ...): BERT The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Just run attached script ## Expected behavior I dont have '-1' masks with my labels but I get those warnings. Expected behavior not to have those warnings when I make training. <!-- A clear and concise description of what you would expect to happen. --> `python3.6/site-packages/tensorflow/python/autograph/impl/api.py:493: UserWarning: Using `-1` to mask the loss for the token is deprecated. Please use `-100` instead.` return py_builtins.overload_of(f)(*args) Code that reproduces the issue: ``` import os os.environ["CUDA_VISIBLE_DEVICES"] = "-1" import transformers import numpy as np import tensorflow as tf from transformers import BertConfig, TFTrainer, TFTrainingArguments, TFBertForTokenClassification transformers.logging.set_verbosity_info() labels = np.ones((32, 18)) labels_as_tensor = tf.convert_to_tensor( labels, dtype=tf.int32, dtype_hint=None, name=None ) inputs_embeds = np.random.normal(size=(32, 18, 768)) inputs_embeds_as_tensor = tf.convert_to_tensor( inputs_embeds, dtype=tf.float32, dtype_hint=None, name=None ) token_type_ids = np.ones((32, 18)) token_type_ids_as_tensor = tf.convert_to_tensor( token_type_ids, dtype=tf.int32, dtype_hint=None, name=None ) batch = ({ 'inputs_embeds': inputs_embeds_as_tensor, 'token_type_ids': token_type_ids_as_tensor }, labels_as_tensor) training_args = TFTrainingArguments(output_dir='~/tensorboard', overwrite_output_dir=True, learning_rate=0.001, logging_dir='~/tensorboard', debug=True, do_train=True, do_predict=True, num_train_epochs=2, per_device_train_batch_size=32, per_device_eval_batch_size=32, save_total_limit=3, evaluate_during_training=True, eval_steps=5) with training_args.strategy.scope(): config = BertConfig(num_labels=1274, output_hidden_states=False, num_hidden_layers=3) model = TFBertForTokenClassification(config) trainer = TFTrainer(model=model, args=training_args) trainer.train_loss = tf.keras.metrics.Sum() trainer.create_optimizer_and_scheduler(20) trainer.distributed_training_steps(batch) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7592/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7591
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7591/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7591/comments
https://api.github.com/repos/huggingface/transformers/issues/7591/events
https://github.com/huggingface/transformers/issues/7591
715,015,183
MDU6SXNzdWU3MTUwMTUxODM=
7,591
BartConfig saving and loading inconsistency
{ "login": "Liyang90", "id": 17171233, "node_id": "MDQ6VXNlcjE3MTcxMjMz", "avatar_url": "https://avatars.githubusercontent.com/u/17171233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Liyang90", "html_url": "https://github.com/Liyang90", "followers_url": "https://api.github.com/users/Liyang90/followers", "following_url": "https://api.github.com/users/Liyang90/following{/other_user}", "gists_url": "https://api.github.com/users/Liyang90/gists{/gist_id}", "starred_url": "https://api.github.com/users/Liyang90/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Liyang90/subscriptions", "organizations_url": "https://api.github.com/users/Liyang90/orgs", "repos_url": "https://api.github.com/users/Liyang90/repos", "events_url": "https://api.github.com/users/Liyang90/events{/privacy}", "received_events_url": "https://api.github.com/users/Liyang90/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "The input argument for `BartConfig.__init__()` should be named `classif_dropout ` instead of `classifier_dropout`", "Great catch, thanks!" ]
1,601
1,601
1,601
CONTRIBUTOR
null
## Environment info ### Who can help Bart: @sshleifer ## Information Model I am using (Bert, XLNet ...): Bart The problem arises when using: * [ ] the official example scripts: (give details below) * [x ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: ``` from transformers import BartConfig config_to_save = BartConfig.from_pretrained('facebook/bart-base', classif_dropout=0.42) config_to_save.save_pretrained('./') config_loaded = BartConfig.from_pretrained('./') assert config_to_save.classif_dropout == config_loaded.classif_dropout, "what?" ``` ## Expected behavior Should raise no error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7591/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7590
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7590/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7590/comments
https://api.github.com/repos/huggingface/transformers/issues/7590/events
https://github.com/huggingface/transformers/pull/7590
714,968,774
MDExOlB1bGxSZXF1ZXN0NDk3OTU4NjQz
7,590
Update README.md
{ "login": "dartrevan", "id": 24587263, "node_id": "MDQ6VXNlcjI0NTg3MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/24587263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dartrevan", "html_url": "https://github.com/dartrevan", "followers_url": "https://api.github.com/users/dartrevan/followers", "following_url": "https://api.github.com/users/dartrevan/following{/other_user}", "gists_url": "https://api.github.com/users/dartrevan/gists{/gist_id}", "starred_url": "https://api.github.com/users/dartrevan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dartrevan/subscriptions", "organizations_url": "https://api.github.com/users/dartrevan/orgs", "repos_url": "https://api.github.com/users/dartrevan/repos", "events_url": "https://api.github.com/users/dartrevan/events{/privacy}", "received_events_url": "https://api.github.com/users/dartrevan/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,601
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7590/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7590", "html_url": "https://github.com/huggingface/transformers/pull/7590", "diff_url": "https://github.com/huggingface/transformers/pull/7590.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7590.patch", "merged_at": 1602103030000 }
https://api.github.com/repos/huggingface/transformers/issues/7589
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7589/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7589/comments
https://api.github.com/repos/huggingface/transformers/issues/7589/events
https://github.com/huggingface/transformers/issues/7589
714,960,020
MDU6SXNzdWU3MTQ5NjAwMjA=
7,589
run_language_modeling.py TPU issue during evaluation
{ "login": "Shiro-LK", "id": 26505641, "node_id": "MDQ6VXNlcjI2NTA1NjQx", "avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shiro-LK", "html_url": "https://github.com/Shiro-LK", "followers_url": "https://api.github.com/users/Shiro-LK/followers", "following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}", "gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}", "starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions", "organizations_url": "https://api.github.com/users/Shiro-LK/orgs", "repos_url": "https://api.github.com/users/Shiro-LK/repos", "events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}", "received_events_url": "https://api.github.com/users/Shiro-LK/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This seems like more of a TPU issue than a `huggingface/transformers` issue. Do you mind copying the full output of your command? Maybe in a `pastebin` or a github gist if it doesn't fit here.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,608
1,608
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: ubuntu - Python version: 3.7 - PyTorch version (GPU?): xla-nightly - Tensorflow version (GPU?): - Using GPU in script?: no TPU - Using distributed or parallel set-up in script?: yes ### Who can help @sgugger @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: [*] the official example scripts: (give details below) run_language_modeling.py The tasks I am working on is: [*] my own task or dataset: (give details below) txt file for pretraining Roberta model ## To reproduce I am trying to launch a pretraining using run_language_modeling.py with TPU unfortunately I got some issue during the evaluation and logging_steps with error message 1. `python run_language_modeling.py --model_name_or_path="roberta-base" --model_type="roberta" --tokenizer_name="roberta-base" --do_train --evaluate_during_training --mlm --mlm_probability=0.15 --train_data_file="train.txt" --eval_data_file="val.txt" --do_eval --per_device_train_batch=8 --per_device_eval_batch=8 --output_dir="robertaweet" --max_steps=5000000 --logging_dir="log_bertweet" --logging_steps=20 --eval_steps=10 --save_steps=25 --dataloader_num_workers=0 --tpu_num_cores=8 --learning_rate=1e-4` 2. 3. `020-10-05 15:36:48.946219: W 11372 tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:160] RPC failed with status = "Unavailable: Socket closed" and grpc_error_string = "{"created":"@1601912208.946077389","description":"Error received from peer ipv4:10.255.226.90:8470","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC` ## Expected behavior training without any error message
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7589/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7588
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7588/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7588/comments
https://api.github.com/repos/huggingface/transformers/issues/7588/events
https://github.com/huggingface/transformers/pull/7588
714,956,404
MDExOlB1bGxSZXF1ZXN0NDk3OTQ4NjQ3
7,588
[makefile] check only .py files
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
CONTRIBUTOR
null
in the `fixup` target add `egrep .py$` to feed black/isort/flake8 only .py files as apparently some of them complain if that's not the case. Fixes: #7579 @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7588/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7588", "html_url": "https://github.com/huggingface/transformers/pull/7588", "diff_url": "https://github.com/huggingface/transformers/pull/7588.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7588.patch", "merged_at": 1601976321000 }
https://api.github.com/repos/huggingface/transformers/issues/7587
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7587/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7587/comments
https://api.github.com/repos/huggingface/transformers/issues/7587/events
https://github.com/huggingface/transformers/pull/7587
714,940,497
MDExOlB1bGxSZXF1ZXN0NDk3OTM1NzM2
7,587
Fix squeezebert docs
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
MEMBER
null
Slightly update the SqueezeBERT documentation to fit standards before the documentation gods are angered.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7587/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7587/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7587", "html_url": "https://github.com/huggingface/transformers/pull/7587", "diff_url": "https://github.com/huggingface/transformers/pull/7587.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7587.patch", "merged_at": 1601979724000 }
https://api.github.com/repos/huggingface/transformers/issues/7586
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7586/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7586/comments
https://api.github.com/repos/huggingface/transformers/issues/7586/events
https://github.com/huggingface/transformers/pull/7586
714,917,574
MDExOlB1bGxSZXF1ZXN0NDk3OTE2NjA1
7,586
Documentation framework toggle should stick
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
MEMBER
null
# What does this PR do? This PR adds the following feature: when clicking on a `PyTorch` or `TensorFlow` button in the documentation in order to show the corresponding framework code sample, the toggle takes effect on all the current pages' code samples. TensorFlow users won't need to click on every code sample to convert it to TensorFlow anymore!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7586/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7586/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7586", "html_url": "https://github.com/huggingface/transformers/pull/7586", "diff_url": "https://github.com/huggingface/transformers/pull/7586.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7586.patch", "merged_at": 1601911437000 }
https://api.github.com/repos/huggingface/transformers/issues/7585
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7585/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7585/comments
https://api.github.com/repos/huggingface/transformers/issues/7585/events
https://github.com/huggingface/transformers/pull/7585
714,904,104
MDExOlB1bGxSZXF1ZXN0NDk3OTA1NzUw
7,585
Documentation fixes
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
COLLABORATOR
null
# What does this PR do? This PR fixes two issues with the documentation: - wrong type annotation in the configurations (see #7559) - wrong example for masked LM models (see this [froum post](https://discuss.huggingface.co/t/questions-on-the-bertmodellmheadmodel/1317/6)) Fixes #7559
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7585/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7585", "html_url": "https://github.com/huggingface/transformers/pull/7585", "diff_url": "https://github.com/huggingface/transformers/pull/7585.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7585.patch", "merged_at": 1601910064000 }
https://api.github.com/repos/huggingface/transformers/issues/7584
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7584/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7584/comments
https://api.github.com/repos/huggingface/transformers/issues/7584/events
https://github.com/huggingface/transformers/issues/7584
714,899,717
MDU6SXNzdWU3MTQ4OTk3MTc=
7,584
XLNet evaluation fails if the size of evaluation set can't be divided by a given evaluation batch size
{ "login": "StepinSilence", "id": 25417535, "node_id": "MDQ6VXNlcjI1NDE3NTM1", "avatar_url": "https://avatars.githubusercontent.com/u/25417535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StepinSilence", "html_url": "https://github.com/StepinSilence", "followers_url": "https://api.github.com/users/StepinSilence/followers", "following_url": "https://api.github.com/users/StepinSilence/following{/other_user}", "gists_url": "https://api.github.com/users/StepinSilence/gists{/gist_id}", "starred_url": "https://api.github.com/users/StepinSilence/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StepinSilence/subscriptions", "organizations_url": "https://api.github.com/users/StepinSilence/orgs", "repos_url": "https://api.github.com/users/StepinSilence/repos", "events_url": "https://api.github.com/users/StepinSilence/events{/privacy}", "received_events_url": "https://api.github.com/users/StepinSilence/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "The XLNet model outputs some past states called `mems` at index 2. Those can't be concatenated together because they have a sequence length that varies. You should pass along `--past_index 2` to your script so that:\r\n1. those `mems` are used\r\n2. they are discarded from the predictions, and thus evaluation should work.\r\n\r\nWe will have something easier to use in the future, but for now it should work around your problem.", "Thanks for your fast reply. Unfortunately ```--past_index 2``` doesn't work for me. \r\nNew error logs\r\n```bash\r\n10/05/2020 22:55:40 - INFO - filelock - Lock 140417916796544 acquired on /data/home/liusishun/glue/SST-2/cached_dev_XLNetTokenizer_64_sst-2.lock\r\n10/05/2020 22:55:41 - INFO - filelock - Lock 140417916796544 released on /data/home/liusishun/glue/SST-2/cached_dev_XLNetTokenizer_64_sst-2.lock\r\n10/05/2020 22:55:44 - INFO - __main__ - *** Evaluate ***\r\nEvaluation: 93%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 13/14 [00:09<00:00, 1.41it/s]\r\nTraceback (most recent call last):\r\n File \"/data/home/liusishun/applications/transformers/examples/text-classification/run_glue.py\", line 247, in <module>\r\n main()\r\n File \"/data/home/liusishun/applications/transformers/examples/text-classification/run_glue.py\", line 197, in main\r\n eval_result = trainer.evaluate(eval_dataset=eval_dataset)\r\n File \"/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer.py\", line 1297, in evaluate\r\n output = self.prediction_loop(eval_dataloader, description=\"Evaluation\")\r\n File \"/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer.py\", line 1377, in prediction_loop\r\n loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only)\r\n File \"/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer.py\", line 1459, in prediction_step\r\n outputs = model(**inputs)\r\n File \"/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/modeling_xlnet.py\", line 1499, in forward\r\n transformer_outputs = self.transformer(\r\n File \"/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 532, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/modeling_xlnet.py\", line 1226, in forward\r\n new_mems = new_mems + (self.cache_mem(output_h, mems[i]),)\r\n File \"/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/modeling_xlnet.py\", line 1011, in cache_mem\r\n new_mem = torch.cat([prev_mem, curr_out], dim=0)[cutoff:]\r\nRuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 40 and 64 in dimension 1 at /opt/conda/conda-bld/pytorch_1579061855666/work/aten/src/THC/generic/THCTensorMath.cu:71\r\n```\r\n\r\ncurrent script\r\n```bash\r\nGLUE_DIR=~/glue\r\nCUDA_VISIBLE_DEVICES=0\r\nTASK_NAME=SST-2\r\n\r\npython3 ~/applications/transformers/examples/text-classification/run_glue.py \\\r\n --model_name_or_path ~/xlnet \\\r\n --task_name $TASK_NAME \\\r\n --do_eval \\\r\n --data_dir $GLUE_DIR/$TASK_NAME \\\r\n --max_seq_length 64 \\\r\n --per_device_train_batch_size 32 \\\r\n --per_device_eval_batch_size 64 \\\r\n --past_index 2 \\\r\n --learning_rate 2e-5 \\\r\n --num_train_epochs 3.0 \\\r\n --output_dir ~/result/$TASK_NAME/ \\\r\n --overwrite_output_dir \\\r\n --eval_steps 100 \\\r\n```\r\nAny idea?", "Asking for the XLNet specialists on our internal slack. I think the main problem is that the model returns those mems that can't be used for anything (and can't be concatenated). The fact you have an error with `past_index` show they can't really be used to speed up sequence classification.", "Thanks for your response. Could you have any temporary workarounds or further actions about this problem?", "Use another model...", "Hi @StepinSilence and @sgugger ! Any updates on this issue?\r\n@StepinSilence were able to find a work around to use XLNet?", "Hi, @adhithyaarun. I remember that this issue occurred when batch size couldn't divide the dataset size, so if you set the batch size a factor of the size of your dataset it may work. However, I can't confirm this right now because our server data disk died several days ago.", "Hello. I encountered the same problem using a Camembert Model with transformers 3.4.0. This issue seems to rise when using dynamic padding. Any workaround for this other than padding to max length?", "You should update to 3.5.0, which contains a fix for this in `Trainer`, to be able to do evaluation with dynamic padding.", "From reading the paper (especilally the experiment part about SQuad, RACE, ...) I originally thought that the cached memory was also used during fine-tuning and not just during pre-training, but from this description here: https://github.com/zihangdai/xlnet/issues/41#issuecomment-505102587 it seems like the cached memory is actually not used during fine-tuning. So I'd suggest that we disable it for all models except `XLNetLMHeadModel` where it obviously makes sense to use it. I'll add a PR to fix it", "Really thank all of you for fixing this issue!" ]
1,601
1,617
1,606
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-4.15.0-117-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @sgugger ## Information Model I am using (Bert, XLNet ...): XLNet-base-cased The problem arises when using: * the official example scripts: run_glue.py The tasks I am working on is: * an official GLUE/SQUaD task: SST-2 ## To reproduce Steps to reproduce the behavior: 1. Install transformers from master and download SST-2 data using ```download_glue_data.py``` 2. Create the following scripts ```bash GLUE_DIR=~/glue CUDA_VISIBLE_DEVICES=0 TASK_NAME=SST-2 python3 ~/applications/transformers/examples/text-classification/run_glue.py \ --model_name_or_path ~/xlnet \ --task_name $TASK_NAME \ --do_eval \ --data_dir $GLUE_DIR/$TASK_NAME \ --max_seq_length 64 \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 64 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir ~/result/$TASK_NAME/ \ --overwrite_output_dir \ --eval_steps 100 ``` 3. run this script ## Expected behavior Trainer should return appropriate evaluation results. Here are logs when evaluating bert-base with above-given hyperparameters. ```bash 10/05/2020 22:28:47 - INFO - filelock - Lock 140392033291808 acquired on /data/home/liusishun/glue/SST-2/cached_dev_BertTokenizer_64_sst-2.lock 10/05/2020 22:28:47 - INFO - filelock - Lock 140392033291808 released on /data/home/liusishun/glue/SST-2/cached_dev_BertTokenizer_64_sst-2.lock 10/05/2020 22:28:50 - INFO - __main__ - *** Evaluate *** Evaluation: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14/14 [00:01<00:00, 7.22it/s] {'eval_loss': 0.6916399122378148, 'eval_acc': 0.49770642201834864, 'step': 0} /data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer.py:1168: FutureWarning: This method is deprecated, use `Trainer.is_world_process_zero()` instead. warnings.warn("This method is deprecated, use `Trainer.is_world_process_zero()` instead.", FutureWarning) 10/05/2020 22:28:52 - INFO - __main__ - ***** Eval results sst-2 ***** 10/05/2020 22:28:52 - INFO - __main__ - eval_loss = 0.6916399122378148 10/05/2020 22:28:52 - INFO - __main__ - eval_acc = 0.49770642201834864 ``` ## Observed behavior ```bash 10/05/2020 22:30:05 - INFO - filelock - Lock 139928226197216 acquired on /data/home/liusishun/glue/SST-2/cached_dev_XLNetTokenizer_64_sst-2.lock 10/05/2020 22:30:05 - INFO - filelock - Lock 139928226197216 released on /data/home/liusishun/glue/SST-2/cached_dev_XLNetTokenizer_64_sst-2.lock 10/05/2020 22:30:09 - INFO - __main__ - *** Evaluate *** Evaluation: 93%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 13/14 [00:02<00:00, 4.44it/s] Traceback (most recent call last): File "/data/home/liusishun/applications/transformers/examples/text-classification/run_glue.py", line 247, in <module> main() File "/data/home/liusishun/applications/transformers/examples/text-classification/run_glue.py", line 197, in main eval_result = trainer.evaluate(eval_dataset=eval_dataset) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer.py", line 1297, in evaluate output = self.prediction_loop(eval_dataloader, description="Evaluation") File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer.py", line 1382, in prediction_loop preds = logits if preds is None else nested_concat(preds, logits, dim=0) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer_utils.py", line 151, in nested_concat return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors)) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer_utils.py", line 151, in <genexpr> return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors)) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer_utils.py", line 151, in nested_concat return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors)) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer_utils.py", line 151, in <genexpr> return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors)) File "/data/home/liusishun/.conda/envs/myenv/lib/python3.8/site-packages/transformers/trainer_utils.py", line 152, in nested_concat return torch.cat((tensors, new_tensors), dim=dim) RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 40 and 64 in dimension 1 at /opt/conda/conda-bld/pytorch_1579061855666/work/aten/src/THC/generic/THCTensorMath.cu:71 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7584/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7584/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7583
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7583/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7583/comments
https://api.github.com/repos/huggingface/transformers/issues/7583/events
https://github.com/huggingface/transformers/issues/7583
714,891,990
MDU6SXNzdWU3MTQ4OTE5OTA=
7,583
RagRetriever.from_pretrained doesn't get another cache_dir.
{ "login": "josemlopez", "id": 4112135, "node_id": "MDQ6VXNlcjQxMTIxMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/4112135?v=4", "gravatar_id": "", "url": "https://api.github.com/users/josemlopez", "html_url": "https://github.com/josemlopez", "followers_url": "https://api.github.com/users/josemlopez/followers", "following_url": "https://api.github.com/users/josemlopez/following{/other_user}", "gists_url": "https://api.github.com/users/josemlopez/gists{/gist_id}", "starred_url": "https://api.github.com/users/josemlopez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josemlopez/subscriptions", "organizations_url": "https://api.github.com/users/josemlopez/orgs", "repos_url": "https://api.github.com/users/josemlopez/repos", "events_url": "https://api.github.com/users/josemlopez/events{/privacy}", "received_events_url": "https://api.github.com/users/josemlopez/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "tagging @patrickvonplaten who will be more suitable to help", "Hey @josemlopez - thanks for the issue. @lhoestq - I think we should add an argument to the `RagRetriever.from_pretrained(...)` that passes the cache dir to the `load_dataset` function, no? What do you think? ", "Thanks for your work guys.\r\n\r\nBTW, in case this can be helpful. \r\nI've move my things so I can have enough room for the dataset in \"/root/.cache/huggingface/datasets/\".\r\n\r\nDoing that, I've suffered this error. I can't say if it is related or not:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnpicklingError Traceback (most recent call last)\r\n/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 459 try:\r\n--> 460 return pickle.load(fid, **pickle_kwargs)\r\n 461 except Exception:\r\n\r\nUnpicklingError: pickle data was truncated\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 552 # Prepare split will record examples associated to the split\r\n--> 553 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 554 except OSError:\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator)\r\n 840 for key, record in utils.tqdm(\r\n--> 841 generator, unit=\" examples\", total=split_info.num_examples, leave=False, disable=not_verbose\r\n 842 ):\r\n\r\n/opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)\r\n 217 try:\r\n--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 219 # return super(tqdm...) will not catch exception\r\n\r\n/opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)\r\n 1128 try:\r\n-> 1129 for obj in iterable:\r\n 1130 yield obj\r\n\r\n~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)\r\n 131 break\r\n--> 132 vecs = np.load(open(vectors_files.pop(0), \"rb\"), allow_pickle=True)\r\n 133 vec_idx = 0\r\n\r\n/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 462 raise IOError(\r\n--> 463 \"Failed to interpret file %s as a pickle\" % repr(file))\r\n 464 finally:\r\n\r\nOSError: Failed to interpret file <_io.BufferedReader name='/root/.cache/huggingface/datasets/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n<ipython-input-6-f28df370ac47> in <module>\r\n 1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets\r\n----> 2 retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=False)\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)\r\n 307 generator_tokenizer = rag_tokenizer.generator\r\n 308 return cls(\r\n--> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer\r\n 310 )\r\n 311 \r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)\r\n 298 self.config = config\r\n 299 if self._init_retrieval:\r\n--> 300 self.init_retrieval()\r\n 301 \r\n 302 @classmethod\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self)\r\n 324 \r\n 325 logger.info(\"initializing retrieval\")\r\n--> 326 self.index.init_index()\r\n 327 \r\n 328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None):\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self)\r\n 238 split=self.dataset_split,\r\n 239 index_name=self.index_name,\r\n--> 240 dummy=self.use_dummy_dataset,\r\n 241 )\r\n 242 self.dataset.set_format(\"numpy\", columns=[\"embeddings\"], output_all_columns=True)\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 609 download_config=download_config,\r\n 610 download_mode=download_mode,\r\n--> 611 ignore_verifications=ignore_verifications,\r\n 612 )\r\n 613 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 474 if not downloaded_from_gcs:\r\n 475 self._download_and_prepare(\r\n--> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 477 )\r\n 478 # Sync info\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 553 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 554 except OSError:\r\n--> 555 raise OSError(\"Cannot find data file. \" + (self.manual_download_instructions or \"\"))\r\n 556 \r\n 557 if verify_infos:\r\n\r\nOSError: Cannot find data file. \r\n```\r\n\r\nWhen running this: \r\n\r\n`retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=False)`", "> Hey @josemlopez - thanks for the issue. @lhoestq - I think we should add an argument to the `RagRetriever.from_pretrained(...)` that passes the cache dir to the `load_dataset` function, no? What do you think?\r\n\r\nSure we can add `cache_dir=...` to `RagRetriever.from_pretrained`.\r\nIn the meantime you can specify `HF_DATASETS_CACHE` to tell where to store the dataset used by RAG for retrieval\r\n\r\n> Thanks for your work guys.\r\n> \r\n> BTW, in case this can be helpful.\r\n> I've move my things so I can have enough room for the dataset in \"/root/.cache/huggingface/datasets/\".\r\n> \r\n> Doing that, I've suffered this error. I can't say if it is related or not:\r\n> ...\r\n> When running this:\r\n> \r\n> `retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=False)`\r\n\r\nCould you create an issue on the `datasets` repo ? this seems unrelated ", "Hi @lhoestq , \r\n\r\n>In the meantime you can specify HF_DATASETS_CACHE to tell where to store the dataset used by RAG for retrieval\r\n\r\nHF_DATASETS_CACHE works fine: \r\n\r\n```\r\nretriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=False)\r\n\r\nUsing custom data configuration psgs_w100.nq.no_index\r\nReusing dataset wiki_dpr (/my_cache/cache/wiki_dpr/psgs_w100.nq.no_index/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2)\r\nUsing custom data configuration psgs_w100.nq.exact\r\n\r\nDownloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /my_cache/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...\r\n```\r\n\r\n>Could you create an issue on the datasets repo ? this seems unrelated\r\n\r\nsure, I'll post the other issue in the datasets repo.\r\n\r\nThanks!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,608
1,608
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-4.19 - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 - Tensorflow version (GPU?): No - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @VictorSanh ## Information Model I am using RAG: The problem arises when using: * [x] the official example scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Open notebook 2. Run the example code changing the 'TRANSFORMERS_CACHE' path to place the dataset in another place than the default one ``` import os os.environ['TRANSFORMERS_CACHE'] = '/workspace/notebooks/POCs/cache' from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") # Here the data is placed in the expected path /workspace... retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) # The dataset is placed in the default place /root/.cache/huggingface/datasets/wiki_dpr/psgs_w100.nq.no_index/0.0.0/ ``` ## Expected behavior `RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)` should place the data in the expected patch '/workspace/notebooks/POCs/cache' I tried with as well with: ` retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", chache_dir='/workspace/notebooks/POCs/cache' use_dummy_dataset=False)` but it doesn't work neither.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7583/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7582
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7582/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7582/comments
https://api.github.com/repos/huggingface/transformers/issues/7582/events
https://github.com/huggingface/transformers/pull/7582
714,886,221
MDExOlB1bGxSZXF1ZXN0NDk3ODkxMTYw
7,582
[TF generation] Fix typo
{ "login": "SidJain1412", "id": 35868478, "node_id": "MDQ6VXNlcjM1ODY4NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/35868478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SidJain1412", "html_url": "https://github.com/SidJain1412", "followers_url": "https://api.github.com/users/SidJain1412/followers", "following_url": "https://api.github.com/users/SidJain1412/following{/other_user}", "gists_url": "https://api.github.com/users/SidJain1412/gists{/gist_id}", "starred_url": "https://api.github.com/users/SidJain1412/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SidJain1412/subscriptions", "organizations_url": "https://api.github.com/users/SidJain1412/orgs", "repos_url": "https://api.github.com/users/SidJain1412/repos", "events_url": "https://api.github.com/users/SidJain1412/events{/privacy}", "received_events_url": "https://api.github.com/users/SidJain1412/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I see that tests are failing, but shouldn't `min_length` and `top_k` simply not be allowed to go to zero?", "`min_length` defaults to 0 which is expteced behavior. `top_k` is 0 if it is not used => so I don't think we should do these changes.", "We can fix the typo though ;-) ", "My bad, that makes sense 😄 " ]
1,601
1,601
1,601
CONTRIBUTOR
null
# What does this PR do? Typo + Parameter Assertion Fix <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7582/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7582", "html_url": "https://github.com/huggingface/transformers/pull/7582", "diff_url": "https://github.com/huggingface/transformers/pull/7582.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7582.patch", "merged_at": 1601981237000 }
https://api.github.com/repos/huggingface/transformers/issues/7581
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7581/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7581/comments
https://api.github.com/repos/huggingface/transformers/issues/7581/events
https://github.com/huggingface/transformers/pull/7581
714,866,355
MDExOlB1bGxSZXF1ZXN0NDk3ODc0OTk3
7,581
Create README.md
{ "login": "abedkhooli", "id": 11407254, "node_id": "MDQ6VXNlcjExNDA3MjU0", "avatar_url": "https://avatars.githubusercontent.com/u/11407254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abedkhooli", "html_url": "https://github.com/abedkhooli", "followers_url": "https://api.github.com/users/abedkhooli/followers", "following_url": "https://api.github.com/users/abedkhooli/following{/other_user}", "gists_url": "https://api.github.com/users/abedkhooli/gists{/gist_id}", "starred_url": "https://api.github.com/users/abedkhooli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abedkhooli/subscriptions", "organizations_url": "https://api.github.com/users/abedkhooli/orgs", "repos_url": "https://api.github.com/users/abedkhooli/repos", "events_url": "https://api.github.com/users/abedkhooli/events{/privacy}", "received_events_url": "https://api.github.com/users/abedkhooli/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Add model card for https://huggingface.co/akhooli/xlm-r-large-arabic-toxic" ]
1,601
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? Model card for https://huggingface.co/akhooli/xlm-r-large-arabic-toxic
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7581/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7581", "html_url": "https://github.com/huggingface/transformers/pull/7581", "diff_url": "https://github.com/huggingface/transformers/pull/7581.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7581.patch", "merged_at": 1602103060000 }
https://api.github.com/repos/huggingface/transformers/issues/7580
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7580/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7580/comments
https://api.github.com/repos/huggingface/transformers/issues/7580/events
https://github.com/huggingface/transformers/pull/7580
714,855,691
MDExOlB1bGxSZXF1ZXN0NDk3ODY2MzQ4
7,580
Expand test to locate flakiness
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
COLLABORATOR
null
# What does this PR do? `test_training_arguments_are_left_untouched` in `test_trainer.py` is a bit flaky, this PR just expands in a loop the assertEqual so we can hopefully locate the source of the flakiness.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7580/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7580", "html_url": "https://github.com/huggingface/transformers/pull/7580", "diff_url": "https://github.com/huggingface/transformers/pull/7580.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7580.patch", "merged_at": 1601905548000 }
https://api.github.com/repos/huggingface/transformers/issues/7579
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7579/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7579/comments
https://api.github.com/repos/huggingface/transformers/issues/7579/events
https://github.com/huggingface/transformers/issues/7579
714,852,226
MDU6SXNzdWU3MTQ4NTIyMjY=
7,579
make modified_only_fixup complains about non .py files
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "yes, the fix is trivial. Which tool is complaining?", "I was running `make modified_only_fixup` before merging master and `black` was complaining.\r\nBut after merging master, there is no complaining.\r\nAnd I should be using `make fixup`, so this might just be a UserError.\r\n\r\nShould we merge the linked PR anyways or wait to see if I run into this again?\r\n", "Indeed, `black` isn't smooth - it picks .py files when you give it a dir, but doesn't do the same if you give it explicit files:\r\n```\r\n$ black Makefile\r\nerror: cannot format Makefile: Cannot parse: 1:1: .PHONY: modified_only_fixup extra_quality_checks quality style fixup fix-copies test test-examples docs\r\nOh no! 💥 💔 💥\r\n1 file failed to reformat.\r\n```\r\nSo yes, please merge the linked PR.", "`make fixup` is just `make modified_only_fixup` + `make extra_quality_checks` so no user error" ]
1,601
1,601
1,601
CONTRIBUTOR
null
easy fix @stas00 ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7579/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7578
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7578/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7578/comments
https://api.github.com/repos/huggingface/transformers/issues/7578/events
https://github.com/huggingface/transformers/issues/7578
714,799,158
MDU6SXNzdWU3MTQ3OTkxNTg=
7,578
RobertaTokenizer.get_special_tokens_mask doesn't check for all special tokens, only for the sep and cls tokens
{ "login": "Muks14x", "id": 11333048, "node_id": "MDQ6VXNlcjExMzMzMDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/11333048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Muks14x", "html_url": "https://github.com/Muks14x", "followers_url": "https://api.github.com/users/Muks14x/followers", "following_url": "https://api.github.com/users/Muks14x/following{/other_user}", "gists_url": "https://api.github.com/users/Muks14x/gists{/gist_id}", "starred_url": "https://api.github.com/users/Muks14x/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muks14x/subscriptions", "organizations_url": "https://api.github.com/users/Muks14x/orgs", "repos_url": "https://api.github.com/users/Muks14x/repos", "events_url": "https://api.github.com/users/Muks14x/events{/privacy}", "received_events_url": "https://api.github.com/users/Muks14x/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,607
1,607
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: macOS-10.15.6-x86_64-i386-64bit - Python version: 3.8.3 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik @mfuntowicz <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): RoBERTa The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ## To reproduce ```python >>> from transformers import RobertaTokenizer, RobertaTokenizerFast >>> tokenizer_slow = RobertaTokenizer.from_pretrained('roberta-base') >>> tokenizer_fast = RobertaTokenizerFast.from_pretrained('roberta-base') >>> tokenizer_slow.add_special_tokens({'additional_special_tokens': ['<a>']}) 1 >>> tokenizer_fast.add_special_tokens({'additional_special_tokens': ['<a>']}) 1 >>> tokenizer_slow.get_special_tokens_mask(tokenizer_slow.encode('<a><pad><mask>'), already_has_special_tokens=True) [1, 0, 0, 0, 1] >>> tokenizer_fast.get_special_tokens_mask(tokenizer_fast.encode('<a><pad><mask>'), already_has_special_tokens=True) [1, 1, 1, 1, 1] ``` Steps to reproduce the behavior: 1. Run the above lines <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior RobertaTokenizer should also mask special tokens, like RobertaTokenizerFast does. Let me know if you need any additional info or could do with a PR. Not sure if this issue is present with other tokenizers. <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7578/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7578/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7577
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7577/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7577/comments
https://api.github.com/repos/huggingface/transformers/issues/7577/events
https://github.com/huggingface/transformers/pull/7577
714,778,111
MDExOlB1bGxSZXF1ZXN0NDk3ODAzMjEy
7,577
Add support to provide initial tokens to decoder of encoder-decoder type models
{ "login": "ayushtiku5", "id": 40797286, "node_id": "MDQ6VXNlcjQwNzk3Mjg2", "avatar_url": "https://avatars.githubusercontent.com/u/40797286?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayushtiku5", "html_url": "https://github.com/ayushtiku5", "followers_url": "https://api.github.com/users/ayushtiku5/followers", "following_url": "https://api.github.com/users/ayushtiku5/following{/other_user}", "gists_url": "https://api.github.com/users/ayushtiku5/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayushtiku5/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayushtiku5/subscriptions", "organizations_url": "https://api.github.com/users/ayushtiku5/orgs", "repos_url": "https://api.github.com/users/ayushtiku5/repos", "events_url": "https://api.github.com/users/ayushtiku5/events{/privacy}", "received_events_url": "https://api.github.com/users/ayushtiku5/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten I have made the required changes. Please review", "@patrickvonplaten I have made the required changes. Please review and merge" ]
1,601
1,603
1,603
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #7502 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7577/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7577", "html_url": "https://github.com/huggingface/transformers/pull/7577", "diff_url": "https://github.com/huggingface/transformers/pull/7577.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7577.patch", "merged_at": 1603090569000 }
https://api.github.com/repos/huggingface/transformers/issues/7576
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7576/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7576/comments
https://api.github.com/repos/huggingface/transformers/issues/7576/events
https://github.com/huggingface/transformers/issues/7576
714,777,173
MDU6SXNzdWU3MTQ3NzcxNzM=
7,576
Trainer evaluate returns empty dictionary
{ "login": "adamwawrzynski", "id": 19324675, "node_id": "MDQ6VXNlcjE5MzI0Njc1", "avatar_url": "https://avatars.githubusercontent.com/u/19324675?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adamwawrzynski", "html_url": "https://github.com/adamwawrzynski", "followers_url": "https://api.github.com/users/adamwawrzynski/followers", "following_url": "https://api.github.com/users/adamwawrzynski/following{/other_user}", "gists_url": "https://api.github.com/users/adamwawrzynski/gists{/gist_id}", "starred_url": "https://api.github.com/users/adamwawrzynski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adamwawrzynski/subscriptions", "organizations_url": "https://api.github.com/users/adamwawrzynski/orgs", "repos_url": "https://api.github.com/users/adamwawrzynski/repos", "events_url": "https://api.github.com/users/adamwawrzynski/events{/privacy}", "received_events_url": "https://api.github.com/users/adamwawrzynski/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "You did not provide any metrics to your `Trainer` and it looks like your dataset has no labels. `Trainer.evaluate` thus can't return anything useful." ]
1,601
1,606
1,606
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-45-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Trainer: @sgugger ## Information Model I am using RoBERT: The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. `python3 finetune_roberta.py -m result/ -d dataset.txt -t dataset.txt` ```python from transformers import (BertForNextSentencePrediction, BertTokenizer, RobertaModel, RobertaTokenizer, Trainer, TrainingArguments) from transformers.data.datasets.language_modeling import TextDatasetForNextSentencePrediction from transformers.data.data_collator import DataCollatorForNextSentencePrediction from argparse import ArgumentParser def parse_args(): parser = ArgumentParser("Fine-tune RoBERTa in Next Sentence Prediction.") parser.add_argument("-m", "--model_path", dest="model_path", required=True, help="Path to RoBERTa model.") parser.add_argument("-d", "--dataset_path", dest="dataset_path", required=True, help="Path to dataset.") parser.add_argument("-t", "--test_dataset_path", dest="test_dataset_path", required=True, help="Path to test dataset.") args = parser.parse_args() return args if __name__ == "__main__": args = parse_args() tokenizer = RobertaTokenizer.from_pretrained(args.model_path) finetune_model = BertForNextSentencePrediction.from_pretrained(args.model_path) training_args = TrainingArguments( output_dir=args.output_path, num_train_epochs=3, per_device_train_batch_size=1, per_device_eval_batch_size=1, warmup_steps=500, weight_decay=0.01, logging_dir='./logs', ) data_collator = DataCollatorForNextSentencePrediction( tokenizer=tokenizer, mlm=False, block_size=512, nsp_probability=0.5, ) train_dataset = TextDatasetForNextSentencePrediction( tokenizer=tokenizer, file_path=args.dataset_path, block_size=512, ) test_dataset = TextDatasetForNextSentencePrediction( tokenizer=tokenizer, file_path=args.test_dataset_path, block_size=512, ) trainer = Trainer( model=finetune_model, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset, data_collator=data_collator, ) print(trainer.evaluate(test_dataset)) ``` Output in terminal: ```bash python3 finetune_roberta.py -m result/ -d dataset_fixed_alior.txt -t dataset_fixed_alior.txt -o results_test/ Special tokens have been added in the vocabulary, make sure the associated word emebedding are fine-tuned or trained. Some weights of the model checkpoint at result/ were not used when initializing RobertaModel: ['bert.embeddings.position_ids', 'bert.embeddings.word_embeddings.weight', 'bert.embeddings.position_embeddings.weight', 'bert.embeddings.token_type_embeddings.weight', 'bert.embeddings.LayerNorm.weight', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.0.attention.self.query.weight', 'bert.encoder.layer.0.attention.self.query.bias', 'bert.encoder.layer.0.attention.self.key.weight', 'bert.encoder.layer.0.attention.self.key.bias', 'bert.encoder.layer.0.attention.self.value.weight', 'bert.encoder.layer.0.attention.self.value.bias', 'bert.encoder.layer.0.attention.output.dense.weight', 'bert.encoder.layer.0.attention.output.dense.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.output.LayerNorm.bias', 'bert.encoder.layer.0.intermediate.dense.weight', 'bert.encoder.layer.0.intermediate.dense.bias', 'bert.encoder.layer.0.output.dense.weight', 'bert.encoder.layer.0.output.dense.bias', 'bert.encoder.layer.0.output.LayerNorm.weight', 'bert.encoder.layer.0.output.LayerNorm.bias', 'bert.encoder.layer.1.attention.self.query.weight', 'bert.encoder.layer.1.attention.self.query.bias', 'bert.encoder.layer.1.attention.self.key.weight', 'bert.encoder.layer.1.attention.self.key.bias', 'bert.encoder.layer.1.attention.self.value.weight', 'bert.encoder.layer.1.attention.self.value.bias', 'bert.encoder.layer.1.attention.output.dense.weight', 'bert.encoder.layer.1.attention.output.dense.bias', 'bert.encoder.layer.1.attention.output.LayerNorm.weight', 'bert.encoder.layer.1.attention.output.LayerNorm.bias', 'bert.encoder.layer.1.intermediate.dense.weight', 'bert.encoder.layer.1.intermediate.dense.bias', 'bert.encoder.layer.1.output.dense.weight', 'bert.encoder.layer.1.output.dense.bias', 'bert.encoder.layer.1.output.LayerNorm.weight', 'bert.encoder.layer.1.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.self.query.weight', 'bert.encoder.layer.2.attention.self.query.bias', 'bert.encoder.layer.2.attention.self.key.weight', 'bert.encoder.layer.2.attention.self.key.bias', 'bert.encoder.layer.2.attention.self.value.weight', 'bert.encoder.layer.2.attention.self.value.bias', 'bert.encoder.layer.2.attention.output.dense.weight', 'bert.encoder.layer.2.attention.output.dense.bias', 'bert.encoder.layer.2.attention.output.LayerNorm.weight', 'bert.encoder.layer.2.attention.output.LayerNorm.bias', 'bert.encoder.layer.2.intermediate.dense.weight', 'bert.encoder.layer.2.intermediate.dense.bias', 'bert.encoder.layer.2.output.dense.weight', 'bert.encoder.layer.2.output.dense.bias', 'bert.encoder.layer.2.output.LayerNorm.weight', 'bert.encoder.layer.2.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.self.query.weight', 'bert.encoder.layer.3.attention.self.query.bias', 'bert.encoder.layer.3.attention.self.key.weight', 'bert.encoder.layer.3.attention.self.key.bias', 'bert.encoder.layer.3.attention.self.value.weight', 'bert.encoder.layer.3.attention.self.value.bias', 'bert.encoder.layer.3.attention.output.dense.weight', 'bert.encoder.layer.3.attention.output.dense.bias', 'bert.encoder.layer.3.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.output.LayerNorm.bias', 'bert.encoder.layer.3.intermediate.dense.weight', 'bert.encoder.layer.3.intermediate.dense.bias', 'bert.encoder.layer.3.output.dense.weight', 'bert.encoder.layer.3.output.dense.bias', 'bert.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.layer.3.output.LayerNorm.bias', 'bert.encoder.layer.4.attention.self.query.weight', 'bert.encoder.layer.4.attention.self.query.bias', 'bert.encoder.layer.4.attention.self.key.weight', 'bert.encoder.layer.4.attention.self.key.bias', 'bert.encoder.layer.4.attention.self.value.weight', 'bert.encoder.layer.4.attention.self.value.bias', 'bert.encoder.layer.4.attention.output.dense.weight', 'bert.encoder.layer.4.attention.output.dense.bias', 'bert.encoder.layer.4.attention.output.LayerNorm.weight', 'bert.encoder.layer.4.attention.output.LayerNorm.bias', 'bert.encoder.layer.4.intermediate.dense.weight', 'bert.encoder.layer.4.intermediate.dense.bias', 'bert.encoder.layer.4.output.dense.weight', 'bert.encoder.layer.4.output.dense.bias', 'bert.encoder.layer.4.output.LayerNorm.weight', 'bert.encoder.layer.4.output.LayerNorm.bias', 'bert.encoder.layer.5.attention.self.query.weight', 'bert.encoder.layer.5.attention.self.query.bias', 'bert.encoder.layer.5.attention.self.key.weight', 'bert.encoder.layer.5.attention.self.key.bias', 'bert.encoder.layer.5.attention.self.value.weight', 'bert.encoder.layer.5.attention.self.value.bias', 'bert.encoder.layer.5.attention.output.dense.weight', 'bert.encoder.layer.5.attention.output.dense.bias', 'bert.encoder.layer.5.attention.output.LayerNorm.weight', 'bert.encoder.layer.5.attention.output.LayerNorm.bias', 'bert.encoder.layer.5.intermediate.dense.weight', 'bert.encoder.layer.5.intermediate.dense.bias', 'bert.encoder.layer.5.output.dense.weight', 'bert.encoder.layer.5.output.dense.bias', 'bert.encoder.layer.5.output.LayerNorm.weight', 'bert.encoder.layer.5.output.LayerNorm.bias', 'bert.encoder.layer.6.attention.self.query.weight', 'bert.encoder.layer.6.attention.self.query.bias', 'bert.encoder.layer.6.attention.self.key.weight', 'bert.encoder.layer.6.attention.self.key.bias', 'bert.encoder.layer.6.attention.self.value.weight', 'bert.encoder.layer.6.attention.self.value.bias', 'bert.encoder.layer.6.attention.output.dense.weight', 'bert.encoder.layer.6.attention.output.dense.bias', 'bert.encoder.layer.6.attention.output.LayerNorm.weight', 'bert.encoder.layer.6.attention.output.LayerNorm.bias', 'bert.encoder.layer.6.intermediate.dense.weight', 'bert.encoder.layer.6.intermediate.dense.bias', 'bert.encoder.layer.6.output.dense.weight', 'bert.encoder.layer.6.output.dense.bias', 'bert.encoder.layer.6.output.LayerNorm.weight', 'bert.encoder.layer.6.output.LayerNorm.bias', 'bert.encoder.layer.7.attention.self.query.weight', 'bert.encoder.layer.7.attention.self.query.bias', 'bert.encoder.layer.7.attention.self.key.weight', 'bert.encoder.layer.7.attention.self.key.bias', 'bert.encoder.layer.7.attention.self.value.weight', 'bert.encoder.layer.7.attention.self.value.bias', 'bert.encoder.layer.7.attention.output.dense.weight', 'bert.encoder.layer.7.attention.output.dense.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.weight', 'bert.encoder.layer.7.attention.output.LayerNorm.bias', 'bert.encoder.layer.7.intermediate.dense.weight', 'bert.encoder.layer.7.intermediate.dense.bias', 'bert.encoder.layer.7.output.dense.weight', 'bert.encoder.layer.7.output.dense.bias', 'bert.encoder.layer.7.output.LayerNorm.weight', 'bert.encoder.layer.7.output.LayerNorm.bias', 'bert.encoder.layer.8.attention.self.query.weight', 'bert.encoder.layer.8.attention.self.query.bias', 'bert.encoder.layer.8.attention.self.key.weight', 'bert.encoder.layer.8.attention.self.key.bias', 'bert.encoder.layer.8.attention.self.value.weight', 'bert.encoder.layer.8.attention.self.value.bias', 'bert.encoder.layer.8.attention.output.dense.weight', 'bert.encoder.layer.8.attention.output.dense.bias', 'bert.encoder.layer.8.attention.output.LayerNorm.weight', 'bert.encoder.layer.8.attention.output.LayerNorm.bias', 'bert.encoder.layer.8.intermediate.dense.weight', 'bert.encoder.layer.8.intermediate.dense.bias', 'bert.encoder.layer.8.output.dense.weight', 'bert.encoder.layer.8.output.dense.bias', 'bert.encoder.layer.8.output.LayerNorm.weight', 'bert.encoder.layer.8.output.LayerNorm.bias', 'bert.encoder.layer.9.attention.self.query.weight', 'bert.encoder.layer.9.attention.self.query.bias', 'bert.encoder.layer.9.attention.self.key.weight', 'bert.encoder.layer.9.attention.self.key.bias', 'bert.encoder.layer.9.attention.self.value.weight', 'bert.encoder.layer.9.attention.self.value.bias', 'bert.encoder.layer.9.attention.output.dense.weight', 'bert.encoder.layer.9.attention.output.dense.bias', 'bert.encoder.layer.9.attention.output.LayerNorm.weight', 'bert.encoder.layer.9.attention.output.LayerNorm.bias', 'bert.encoder.layer.9.intermediate.dense.weight', 'bert.encoder.layer.9.intermediate.dense.bias', 'bert.encoder.layer.9.output.dense.weight', 'bert.encoder.layer.9.output.dense.bias', 'bert.encoder.layer.9.output.LayerNorm.weight', 'bert.encoder.layer.9.output.LayerNorm.bias', 'bert.encoder.layer.10.attention.self.query.weight', 'bert.encoder.layer.10.attention.self.query.bias', 'bert.encoder.layer.10.attention.self.key.weight', 'bert.encoder.layer.10.attention.self.key.bias', 'bert.encoder.layer.10.attention.self.value.weight', 'bert.encoder.layer.10.attention.self.value.bias', 'bert.encoder.layer.10.attention.output.dense.weight', 'bert.encoder.layer.10.attention.output.dense.bias', 'bert.encoder.layer.10.attention.output.LayerNorm.weight', 'bert.encoder.layer.10.attention.output.LayerNorm.bias', 'bert.encoder.layer.10.intermediate.dense.weight', 'bert.encoder.layer.10.intermediate.dense.bias', 'bert.encoder.layer.10.output.dense.weight', 'bert.encoder.layer.10.output.dense.bias', 'bert.encoder.layer.10.output.LayerNorm.weight', 'bert.encoder.layer.10.output.LayerNorm.bias', 'bert.encoder.layer.11.attention.self.query.weight', 'bert.encoder.layer.11.attention.self.query.bias', 'bert.encoder.layer.11.attention.self.key.weight', 'bert.encoder.layer.11.attention.self.key.bias', 'bert.encoder.layer.11.attention.self.value.weight', 'bert.encoder.layer.11.attention.self.value.bias', 'bert.encoder.layer.11.attention.output.dense.weight', 'bert.encoder.layer.11.attention.output.dense.bias', 'bert.encoder.layer.11.attention.output.LayerNorm.weight', 'bert.encoder.layer.11.attention.output.LayerNorm.bias', 'bert.encoder.layer.11.intermediate.dense.weight', 'bert.encoder.layer.11.intermediate.dense.bias', 'bert.encoder.layer.11.output.dense.weight', 'bert.encoder.layer.11.output.dense.bias', 'bert.encoder.layer.11.output.LayerNorm.weight', 'bert.encoder.layer.11.output.LayerNorm.bias', 'bert.pooler.dense.weight', 'bert.pooler.dense.bias', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias'] - This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaModel were not initialized from the model checkpoint at result/ and are newly initialized: ['embeddings.word_embeddings.weight', 'embeddings.position_embeddings.weight', 'embeddings.token_type_embeddings.weight', 'embeddings.LayerNorm.weight', 'embeddings.LayerNorm.bias', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.0.output.dense.weight', 'encoder.layer.0.output.dense.bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.1.attention.self.query.bias', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.1.attention.self.value.bias', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.1.output.dense.weight', 'encoder.layer.1.output.dense.bias', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.2.attention.self.value.weight', 'encoder.layer.2.attention.self.value.bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.3.attention.self.key.weight', 'encoder.layer.3.attention.self.key.bias', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.3.attention.self.value.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.3.output.dense.bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.4.attention.self.query.weight', 'encoder.layer.4.attention.self.query.bias', 'encoder.layer.4.attention.self.key.weight', 'encoder.layer.4.attention.self.key.bias', 'encoder.layer.4.attention.self.value.weight', 'encoder.layer.4.attention.self.value.bias', 'encoder.layer.4.attention.output.dense.weight', 'encoder.layer.4.attention.output.dense.bias', 'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.output.LayerNorm.bias', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.4.output.dense.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.5.attention.self.query.weight', 'encoder.layer.5.attention.self.query.bias', 'encoder.layer.5.attention.self.key.weight', 'encoder.layer.5.attention.self.key.bias', 'encoder.layer.5.attention.self.value.weight', 'encoder.layer.5.attention.self.value.bias', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.5.output.dense.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.6.attention.self.query.weight', 'encoder.layer.6.attention.self.query.bias', 'encoder.layer.6.attention.self.key.weight', 'encoder.layer.6.attention.self.key.bias', 'encoder.layer.6.attention.self.value.weight', 'encoder.layer.6.attention.self.value.bias', 'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.bias', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.output.dense.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.7.attention.self.query.weight', 'encoder.layer.7.attention.self.query.bias', 'encoder.layer.7.attention.self.key.weight', 'encoder.layer.7.attention.self.key.bias', 'encoder.layer.7.attention.self.value.weight', 'encoder.layer.7.attention.self.value.bias', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.7.attention.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.7.output.dense.weight', 'encoder.layer.7.output.dense.bias', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.8.attention.self.query.weight', 'encoder.layer.8.attention.self.query.bias', 'encoder.layer.8.attention.self.key.weight', 'encoder.layer.8.attention.self.key.bias', 'encoder.layer.8.attention.self.value.weight', 'encoder.layer.8.attention.self.value.bias', 'encoder.layer.8.attention.output.dense.weight', 'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.8.output.dense.bias', 'encoder.layer.8.output.LayerNorm.weight', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.9.attention.self.query.weight', 'encoder.layer.9.attention.self.query.bias', 'encoder.layer.9.attention.self.key.weight', 'encoder.layer.9.attention.self.key.bias', 'encoder.layer.9.attention.self.value.weight', 'encoder.layer.9.attention.self.value.bias', 'encoder.layer.9.attention.output.dense.weight', 'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.9.output.dense.weight', 'encoder.layer.9.output.dense.bias', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.9.output.LayerNorm.bias', 'encoder.layer.10.attention.self.query.weight', 'encoder.layer.10.attention.self.query.bias', 'encoder.layer.10.attention.self.key.weight', 'encoder.layer.10.attention.self.key.bias', 'encoder.layer.10.attention.self.value.weight', 'encoder.layer.10.attention.self.value.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.10.attention.output.LayerNorm.weight', 'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.10.output.dense.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.11.attention.self.query.weight', 'encoder.layer.11.attention.self.query.bias', 'encoder.layer.11.attention.self.key.weight', 'encoder.layer.11.attention.self.key.bias', 'encoder.layer.11.attention.self.value.weight', 'encoder.layer.11.attention.self.value.bias', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.11.output.dense.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.11.output.LayerNorm.bias', 'pooler.dense.weight', 'pooler.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Evaluation: 0%| | 0/7385 [00:00<?, ?it/s] ... [[[206, 3, 3, 47, 3, 4224, 163, 3, 2197, 3, 3, 3, 91, 324, 3, 2920, 3, 3, 47, 3, 3, 170, 21822, 3, 3, 14139, 3, 301, 3, 1210, 1692, 3, 67, 3, 25690, 358, 3, 258, 84, 3, 447, 3, 2980, 3, 43750, 3, 7388, 3, 15679, 4121, 3, 254, 3, 15390, 223, 21822, 3, 12135, 3, 269, 3, 254, 3, 10577, 3, 88, 611, 90, 3, 4729, 15616, 3, 92, 3, 336, 163, 21822, 3, 526, 3, 269, 3, 98, 3048, 3, 4224, 3, 215, 21822, 3, 88, 358, 3, 88, 611, 84], [254, 3, 67, 3, 88, 611, 74, 21822, 3, 3, 88, 611, 74, 21822, 3, 3, 206, 3, 3, 2048, 74, 21822, 3, 3, 92, 3, 336, 163, 21822, 3, 526]], [[254, 3, 67, 3, 88, 611, 74, 21822, 3, 3, 88, 611, 74, 21822, 3, 3, 206, 3, 3, 2048, 74, 21822, 3, 3, 92, 3, 336, 163, 21822, 3, 526], [1598, 3, 3, 56, 2726, 3, 5704, 3, 2980, 4847, 3, 336, 826, 237, 3, 4224, 163, 3, 254, 3, 10577, 3, 25690, 3, 3412, 23962, 567]], [[301, 6393, 324, 3, 36461, 118, 238, 3, 56, 3, 269, 3, 26672, 84, 3, 93, 3, 169, 210, 3, 3968, 368, 3, 7388, 3, 477, 3, 3, 91, 3, 301, 21176, 3, 94, 3, 94, 3, 301, 21176, 3, 7388, 3, 67, 3, 26605, 110, 21822, 3, 3, 15191, 114, 21822, 3, 3, 3119, 3, 2197, 3, 7208, 3, 95, 661, 3, 227, 3, 3, 3, 2554, 3, 2197, 4847, 3, 2197, 21822, 3, 3, 477, 3, 3, 9746, 3, 4224, 21822, 3, 3, 91, 41872, 311, 3, 1068, 25393, 3, 353, 3, 23635, 3, 75, 3, 1146, 173, 3, 3, 5379, 3, 237, 2736, 3, 3, 93, 3, 1068, 105, 818, 3, 3, 3, 91, 324, 3, 258, 90, 3, 324, 3, 3, 56, 3, 301, 3, 1210, 1692, 3, 25690, 431, 3, 3, 3, 91, 324, 3, 94, 3, 1844, 3, 3, 3, 1146, 163, 21822, 3, 3, 3, 56, 3, 3, 56, 3, 2197, 3, 6073, 3, 3, 29623], [3112, 130, 3, 284, 3, 5704, 3, 94, 3, 95, 661, 3, 2197, 4847]], [[3112, 130, 3, 284, 3, 5704, 3, 94, 3, 95, 661, 3, 2197, 4847], [301, 6393, 324, 3, 36461, 118, 238, 3, 93, 3, 2197, 938, 3, 3190, 75, 91, 3, 388, 3, 336, 3, 4224, 110, 21822, 3, 3, 76, 2018, 3, 215, 518, 74, 3, 3, 93, 88, 21822, 3, 3, 92, 3, 848, 535, 614, 3, 4516, 25307, 3, 353, 3, 95, 94, 3, 10032, 324, 3, 317, 21822, 3, 3, 575, 3, 4224, 3, 92, 7278, 3, 3, 47, 3, 93, 3, 3313, 3, 88, 9813, 1964, 3, 477, 3, 3, 67, 3, 317, 21822, 3, 3, 6494, 114, 21822, 3, 3, 488, 664, 2197, 2518, 21822, 3, 3, 388, 3, 95, 661, 3, 477, 3, 3, 9746, 3, 4224, 21822, 3, 3, 93, 8714, 892, 3, 2197, 12334, 3, 93, 3079, 1451, 3, 15227]], [[301, 6393, 324, 3, 36461, 118, 238, 3, 93, 3, 2197, 938, 3, 3190, 75, 91, 3, 388, 3, 336, 3, 4224, 110, 21822, 3, 3, 76, 2018, 3, 215, 518, 74, 3, 3, 93, 88, 21822, 3, 3, 92, 3, 848, 535, 614, 3, 4516, 25307, 3, 353, 3, 95, 94, 3, 10032, 324, 3, 317, 21822, 3, 3, 575, 3, 4224, 3, 92, 7278, 3, 3, 47, 3, 93, 3, 3313, 3, 88, 9813, 1964, 3, 477, 3, 3, 67, 3, 317, 21822, 3, 3, 6494, 114, 21822, 3, 3, 488, 664, 2197, 2518, 21822, 3, 3, 388, 3, 95, 661, 3, 477, 3, 3, 9746, 3, 4224, 21822, 3, 3, 93, 8714, 892, 3, 2197, 12334, 3, 93, 3079, 1451, 3, 15227], [1569, 359, 3, 404, 3, 3, 3, 7850, 219, 3, 92, 3, 3313, 3, 2854, 81, 3, 3412, 1459, 567, 3, 3412, 1459, 567, 3, 8142, 3, 3412, 1459, 567, 3, 4516, 25307, 98, 21822, 3, 3, 92, 3, 19638, 3, 3968, 176, 3, 2197, 3, 93, 3120, 95, 207, 3, 3, 238, 3, 239, 1881, 132, 3, 10577, 3, 299, 308, 3350, 360, 3, 75, 3, 95, 661, 3, 353, 3, 1068, 7183, 200, 3, 4761, 3, 22443, 3, 11031, 3, 3, 490, 21822, 3, 3, 169, 5252, 4082, 21822, 3, 3, 269, 3, 477, 149, 21822, 3, 3, 227, 3190, 3, 26605, 110, 21822, 3, 3, 383, 98, 21822, 3, 21822, 3, 3, 2197, 3, 2197, 3, 301, 1210, 85, 21822, 3, 3, 1611, 3, 3, 3, 95, 3, 3, 215, 21822, 3, 3, 92, 4655, 3, 47, 3, 575, 3, 12992, 3, 353, 16035, 95, 3, 2923, 21530, 3, 17248, 8268, 3, 47, 24072, 3, 4224, 86, 3, 2197, 3, 10577, 3, 45820, 67, 3, 17248, 5766, 3, 570, 21822, 3, 8325, 21822, 3, 3, 75, 3, 11517, 6412, 21822, 3, 3, 169, 5252, 4082, 21822, 3, 3, 47, 3, 1145, 3, 3, 131, 3322, 3, 1709, 3, 16581, 544, 3, 1328, 683, 3, 388, 3, 1210, 85, 21822, 3, 3, 22443, 3, 27921, 173, 21822, 3, 3, 301, 94, 183, 3, 3, 3, 301, 388, 215, 3, 3, 56, 3, 317, 21822, 3, 3, 21592, 3, 477, 3, 3, 9746, 3, 4224, 163, 3, 67, 3, 46, 2728, 943, 3, 43750, 3, 29317, 5571, 3, 4224, 163, 3, 47, 3482, 84, 3, 2777, 3, 88, 21822, 3, 380, 3, 7850, 4049, 3, 353, 14227, 78, 3, 3, 4224, 2139, 88, 3, 3, 93, 3, 92, 3, 3968, 176, 3, 1709, 3, 227, 3, 3, 3, 488, 664, 19586, 28602, 3, 169, 210, 3, 15390, 223, 21822, 3, 1326, 98, 21822, 3, 206, 3, 673, 3, 1146, 173, 3, 3, 5379, 3, 591, 3, 258, 84, 3, 138, 546, 3, 3, 238, 3, 383, 679, 90, 21822, 3, 3, 269, 3, 1617, 3, 3, 258, 3, 4224, 3, 92, 3, 1642, 3, 2774, 261, 3, 324, 3, 3, 56, 324, 3, 4761, 3, 22443, 3, 301, 5777, 67]], [[1569, 359, 3, 404, 3, 3, 3, 7850, 219, 3, 92, 3, 3313, 3, 2854, 81, 3, 3412, 1459, 567, 3, 3412, 1459, 567, 3, 8142, 3, 3412, 1459, 567, 3, 4516, 25307, 98, 21822, 3, 3, 92, 3, 19638, 3, 3968, 176, 3, 2197, 3, 93, 3120, 95, 207, 3, 3, 238, 3, 239, 1881, 132, 3, 10577, 3, 299, 308, 3350, 360, 3, 75, 3, 95, 661, 3, 353, 3, 1068, 7183, 200, 3, 4761, 3, 22443, 3, 11031, 3, 3, 490, 21822, 3, 3, 169, 5252, 4082, 21822, 3, 3, 269, 3, 477, 149, 21822, 3, 3, 227, 3190, 3, 26605, 110, 21822, 3, 3, 383, 98, 21822, 3, 21822, 3, 3, 2197, 3, 2197, 3, 301, 1210, 85, 21822, 3, 3, 1611, 3, 3, 3, 95, 3, 3, 215, 21822, 3, 3, 92, 4655, 3, 47, 3, 575, 3, 12992, 3, 353, 16035, 95, 3, 2923, 21530, 3, 17248, 8268, 3, 47, 24072, 3, 4224, 86, 3, 2197, 3, 10577, 3, 45820, 67, 3, 17248, 5766, 3, 570, 21822, 3, 8325, 21822, 3, 3, 75, 3, 11517, 6412, 21822, 3, 3, 169, 5252, 4082, 21822, 3, 3, 47, 3, 1145, 3, 3, 131, 3322, 3, 1709, 3, 16581, 544, 3, 1328, 683, 3, 388, 3, 1210, 85, 21822, 3, 3, 22443, 3, 27921, 173, 21822, 3, 3, 301, 94, 183, 3, 3, 3, 301, 388, 215, 3, 3, 56, 3, 317, 21822, 3, 3, 21592, 3, 477, 3, 3, 9746, 3, 4224, 163, 3, 67, 3, 46, 2728, 943, 3, 43750, 3, 29317, 5571, 3, 4224, 163, 3, 47, 3482, 84, 3, 2777, 3, 88, 21822, 3, 380, 3, 7850, 4049, 3, 353, 14227, 78, 3, 3, 4224, 2139, 88, 3, 3, 93, 3, 92, 3, 3968, 176, 3, 1709, 3, 227, 3, 3, 3, 488, 664, 19586, 28602, 3, 169, 210, 3, 15390, 223, 21822, 3, 1326, 98, 21822, 3, 206, 3, 673, 3, 1146, 173, 3, 3, 5379, 3, 591, 3, 258, 84, 3, 138, 546, 3, 3, 238, 3, 383, 679, 90, 21822, 3, 3, 269, 3, 1617, 3, 3, 258, 3, 4224, 3, 92, 3, 1642, 3, 2774, 261, 3, 324, 3, 3, 56, 324, 3, 4761, 3, 22443, 3, 301, 5777, 67], [477, 3, 3, 91, 3, 94, 3, 2980, 3, 67, 3, 3190, 75, 91, 3, 254, 3, 7208, 3, 67, 3, 3190, 75, 91, 3, 94, 3, 94, 3, 94, 3, 308, 4540, 3, 383, 43129, 252, 3, 501]], [[477, 3, 3, 91, 3, 94, 3, 2980, 3, 67, 3, 3190, 75, 91, 3, 254, 3, 7208, 3, 67, 3, 3190, 75, 91, 3, 94, 3, 94, 3, 94, 3, 308, 4540, 3, 383, 43129, 252, 3, 501], [7388, 3, 36461, 118, 238, 3, 2932, 3, 9216, 3, 3, 33118, 3, 1500, 19586, 23671, 3, 673, 3, 4592, 3, 3, 2447, 3, 353, 14227, 132, 3, 94, 3, 93, 1413, 855, 21822, 3, 3, 93, 3, 9486, 21822, 3]], []] Evaluation: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7385/7385 [37:58<00:00, 3.24it/s] {} ``` ## Expected behavior Trainer returns dictionary with statistics about model performance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7576/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7575
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7575/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7575/comments
https://api.github.com/repos/huggingface/transformers/issues/7575/events
https://github.com/huggingface/transformers/pull/7575
714,750,560
MDExOlB1bGxSZXF1ZXN0NDk3NzgwMjgz
7,575
docs(pretrained_models): fix num parameters
{ "login": "amineabdaoui", "id": 17952908, "node_id": "MDQ6VXNlcjE3OTUyOTA4", "avatar_url": "https://avatars.githubusercontent.com/u/17952908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amineabdaoui", "html_url": "https://github.com/amineabdaoui", "followers_url": "https://api.github.com/users/amineabdaoui/followers", "following_url": "https://api.github.com/users/amineabdaoui/following{/other_user}", "gists_url": "https://api.github.com/users/amineabdaoui/gists{/gist_id}", "starred_url": "https://api.github.com/users/amineabdaoui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amineabdaoui/subscriptions", "organizations_url": "https://api.github.com/users/amineabdaoui/orgs", "repos_url": "https://api.github.com/users/amineabdaoui/repos", "events_url": "https://api.github.com/users/amineabdaoui/events{/privacy}", "received_events_url": "https://api.github.com/users/amineabdaoui/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,676
1,601
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR corrects the number of parameters of pretrained BERT based models presented in the documentation. Sometimes the difference between a given model and its pairs is important. For instance: `bert-base-uncased` has **110M parameters** but `bert-base-multilingual-cased` has more than **178M parameters**, even if both models share the same architecture (12-layers, 768-hidden, 12-heads). The difference is due to the vocabulary size: `bert-base-uncased` uses a vocabulary of **30k** entries while `bert-base-multilingual-cased` uses a vocabulary of **119k** entries. To compute the number of parameters: ``` python from transformers import AutoModelForMaskedLM bert_base = AutoModelForMaskedLM.from_pretrained('bert-base-uncased') print(bert_base.num_parameters()) bert_multiling = AutoModelForMaskedLM.from_pretrained('bert-base-multilingual-cased') print(bert_multiling.num_parameters()) ``` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. Particularly: @LysandreJik and @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7575/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7575/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7575", "html_url": "https://github.com/huggingface/transformers/pull/7575", "diff_url": "https://github.com/huggingface/transformers/pull/7575.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7575.patch", "merged_at": 1601898657000 }
https://api.github.com/repos/huggingface/transformers/issues/7574
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7574/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7574/comments
https://api.github.com/repos/huggingface/transformers/issues/7574/events
https://github.com/huggingface/transformers/issues/7574
714,746,805
MDU6SXNzdWU3MTQ3NDY4MDU=
7,574
Some weights of GPT2DoubleHeadsModel were not initialized from the model checkpoint at gpt2 and are newly initialized
{ "login": "ZorrowHu", "id": 16571479, "node_id": "MDQ6VXNlcjE2NTcxNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/16571479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZorrowHu", "html_url": "https://github.com/ZorrowHu", "followers_url": "https://api.github.com/users/ZorrowHu/followers", "following_url": "https://api.github.com/users/ZorrowHu/following{/other_user}", "gists_url": "https://api.github.com/users/ZorrowHu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZorrowHu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZorrowHu/subscriptions", "organizations_url": "https://api.github.com/users/ZorrowHu/orgs", "repos_url": "https://api.github.com/users/ZorrowHu/repos", "events_url": "https://api.github.com/users/ZorrowHu/events{/privacy}", "received_events_url": "https://api.github.com/users/ZorrowHu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! I recommend you read [this doc](https://huggingface.co/transformers/task_summary.html) first to get an understanding of different tasks.\r\n\r\nWhat the warning you got means:\r\n\r\n- The model checkpoint (`\"gpt2\"`) was trained on a specific task (here, causal language modelling, or CLM)\r\n- You're loading that checkpoint in an architecture that has an additional head on top of it. This means there are a few more layers on top of the existing model.\r\n- The warning tells you: The base model (the GPT-2 architecture) is correctly initialized from the checkpoint. The additional head **is not**.\r\n- It cannot be initialized from that checkpoint as the multiple-choice head requires to be trained on a multiple-choice task. The CLM task mentioned earlier doesn't require this head and doesn't train it.\r\n- If you want to leverage that checkpoint with multiple-choice, it means that you should train these few layers on a multiple-choice task. Similarly to sequence classification or token classification, there are multiple different multiple-choice tasks, so you should find/create a dataset close to your use-case. You can find an example script showcasing that [here](https://github.com/huggingface/transformers/tree/master/examples/multiple-choice).\r\n\r\nI hope I helped answer your queries. Feel free to re-open if you have additional questions." ]
1,601
1,601
1,601
NONE
null
# ❓ Questions & Help Hi, I'm new to gpt2 and also this project! I was trying to run an example in the tutorials **https://huggingface.co/transformers/model_doc/gpt2.html** as follows: ``` import torch from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2DoubleHeadsModel.from_pretrained('gpt2', return_dict=True) # Add a [CLS] to the vocabulary (we should train it also!) num_added_tokens = tokenizer.add_special_tokens({'cls_token': '[CLS]'}) embedding_layer = model.resize_token_embeddings(len(tokenizer)) # Update the model embeddings with the new vocabulary size choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"] encoded_choices = [tokenizer.encode(s) for s in choices] cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices] input_ids = torch.tensor(encoded_choices).unsqueeze(0) # Batch size: 1, number of choices: 2 mc_token_ids = torch.tensor([cls_token_location]) # Batch size: 1 outputs = model(input_ids, mc_token_ids=mc_token_ids) lm_logits = outputs.logits mc_logits = outputs.mc_logits ``` Then I got errors as below: > Some weights of GPT2DoubleHeadsModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight', 'multiple_choice_head.summary.weight', 'multiple_choice_head.summary.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. **https://github.com/huggingface/transformers/issues/6667** I've looked it up and find somebody with the same question as me. But I still got confused about what and how I can do to " fine-tune my model on a multiple-choice task". Maybe it's a dumb question though, but I still want to know how to make it work!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7574/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7573
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7573/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7573/comments
https://api.github.com/repos/huggingface/transformers/issues/7573/events
https://github.com/huggingface/transformers/pull/7573
714,704,706
MDExOlB1bGxSZXF1ZXN0NDk3NzQyMDQ3
7,573
[model_card] bert-base-5lang-cased
{ "login": "amineabdaoui", "id": 17952908, "node_id": "MDQ6VXNlcjE3OTUyOTA4", "avatar_url": "https://avatars.githubusercontent.com/u/17952908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amineabdaoui", "html_url": "https://github.com/amineabdaoui", "followers_url": "https://api.github.com/users/amineabdaoui/followers", "following_url": "https://api.github.com/users/amineabdaoui/following{/other_user}", "gists_url": "https://api.github.com/users/amineabdaoui/gists{/gist_id}", "starred_url": "https://api.github.com/users/amineabdaoui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amineabdaoui/subscriptions", "organizations_url": "https://api.github.com/users/amineabdaoui/orgs", "repos_url": "https://api.github.com/users/amineabdaoui/repos", "events_url": "https://api.github.com/users/amineabdaoui/events{/privacy}", "received_events_url": "https://api.github.com/users/amineabdaoui/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,601
1,602
1,602
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds the model card of [amine/bert-base-5lang-cased](https://huggingface.co/amine/bert-base-5lang-cased). A smaller version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handles only 5 languages (en, fr, es, de and zh) instead of 104 while reducing its size by 30%. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7573/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7573", "html_url": "https://github.com/huggingface/transformers/pull/7573", "diff_url": "https://github.com/huggingface/transformers/pull/7573.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7573.patch", "merged_at": 1602103094000 }
https://api.github.com/repos/huggingface/transformers/issues/7572
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7572/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7572/comments
https://api.github.com/repos/huggingface/transformers/issues/7572/events
https://github.com/huggingface/transformers/issues/7572
714,699,196
MDU6SXNzdWU3MTQ2OTkxOTY=
7,572
Finetuning T5: Keyword arguments not recognized.
{ "login": "MichaelJanz", "id": 66110831, "node_id": "MDQ6VXNlcjY2MTEwODMx", "avatar_url": "https://avatars.githubusercontent.com/u/66110831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MichaelJanz", "html_url": "https://github.com/MichaelJanz", "followers_url": "https://api.github.com/users/MichaelJanz/followers", "following_url": "https://api.github.com/users/MichaelJanz/following{/other_user}", "gists_url": "https://api.github.com/users/MichaelJanz/gists{/gist_id}", "starred_url": "https://api.github.com/users/MichaelJanz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichaelJanz/subscriptions", "organizations_url": "https://api.github.com/users/MichaelJanz/orgs", "repos_url": "https://api.github.com/users/MichaelJanz/repos", "events_url": "https://api.github.com/users/MichaelJanz/events{/privacy}", "received_events_url": "https://api.github.com/users/MichaelJanz/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,607
1,607
CONTRIBUTOR
null
# ❓ Questions & Help ## Details Hi, I want to finetune the T5-small model for summarization purposes, to finetune the T5-large later. I prepared my data as as shown in the examples but during training i receive the message: `Keyword arguments {'src_lang': None, 'tgt_lang': None, 'add_prefix_space': False} not recognized.` which indicates to me, that somehow my data preparation process is wrong. However, I was not able to find out how the data for T5 has to be prepared properly (as the T5 is a multi-ability model, the data has to be marked somehow, but I dont know how). Currently my train data looks as the follow: train.source: Line 1: A long text Line 2: Another long Text train.target: Line 1: 'target':'Target for the text' Line 2: 'target':'Another target for the text' This question is in my view to specific to the transformer huggingface-architecture as I use the `finetune_sh` script in the seq2seq example folder. A small example, of how the data has to be structured for T5 will be very helpful, thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7572/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7571
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7571/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7571/comments
https://api.github.com/repos/huggingface/transformers/issues/7571/events
https://github.com/huggingface/transformers/issues/7571
714,691,404
MDU6SXNzdWU3MTQ2OTE0MDQ=
7,571
Sequence Classification One-Hot Encoded Data
{ "login": "moritzblum", "id": 31183934, "node_id": "MDQ6VXNlcjMxMTgzOTM0", "avatar_url": "https://avatars.githubusercontent.com/u/31183934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moritzblum", "html_url": "https://github.com/moritzblum", "followers_url": "https://api.github.com/users/moritzblum/followers", "following_url": "https://api.github.com/users/moritzblum/following{/other_user}", "gists_url": "https://api.github.com/users/moritzblum/gists{/gist_id}", "starred_url": "https://api.github.com/users/moritzblum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moritzblum/subscriptions", "organizations_url": "https://api.github.com/users/moritzblum/orgs", "repos_url": "https://api.github.com/users/moritzblum/repos", "events_url": "https://api.github.com/users/moritzblum/events{/privacy}", "received_events_url": "https://api.github.com/users/moritzblum/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, if you want to implement a custom loss, you should not pass `labels` to the model, but instead retrieve the `hidden_states` and compute them as you would with any other PyTorch model.", "Thanks for your fast answer. I found out that nn.CrossEntropyLoss expects class indices and does not take one-hot encoded tensors as target labels. So there is no difference between your implementation and using one-hot encoded labels." ]
1,601
1,601
1,601
NONE
null
## Environment info - `transformers` version: 3.3.1 ### Who can help albert, bert, GPT2, XLM: @LysandreJik ## Information I am using Bert and Roberta. The tasks I am working on is: Sequence Classification ## Problem The model does not work with "One-Hot Encoded" data. The model only accepts a list of integers as labels which is feed into the MSELoss. This is undesired in a multi-label classification task with categorical data, because an order of the classes is induced.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7571/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7570
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7570/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7570/comments
https://api.github.com/repos/huggingface/transformers/issues/7570/events
https://github.com/huggingface/transformers/issues/7570
714,688,303
MDU6SXNzdWU3MTQ2ODgzMDM=
7,570
Import error for MarianMTModel
{ "login": "brian-o-mars", "id": 52781261, "node_id": "MDQ6VXNlcjUyNzgxMjYx", "avatar_url": "https://avatars.githubusercontent.com/u/52781261?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brian-o-mars", "html_url": "https://github.com/brian-o-mars", "followers_url": "https://api.github.com/users/brian-o-mars/followers", "following_url": "https://api.github.com/users/brian-o-mars/following{/other_user}", "gists_url": "https://api.github.com/users/brian-o-mars/gists{/gist_id}", "starred_url": "https://api.github.com/users/brian-o-mars/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brian-o-mars/subscriptions", "organizations_url": "https://api.github.com/users/brian-o-mars/orgs", "repos_url": "https://api.github.com/users/brian-o-mars/repos", "events_url": "https://api.github.com/users/brian-o-mars/events{/privacy}", "received_events_url": "https://api.github.com/users/brian-o-mars/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, please fill-in the template or we won't be able to help you.", "Hello, I have updated it. Thanks", "This part is the most important part, please complete it:\r\n\r\n\r\n> ```\r\n> transformers version:\r\n> Platform:\r\n> Python version:\r\n> PyTorch version (GPU?):\r\n> Tensorflow version (GPU?):\r\n> Using GPU in script?:\r\n> Using distributed or parallel set-up in script?:\r\n> ```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,607
1,607
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): MarianMTModel The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) I tried importing MarianMTmodel from transformers and it raised an error message The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) My own task ## To reproduce Steps to reproduce the behavior: 1.from transformers import MarianMTModel, MarianTokenizer 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Imports MarianMTModel correctly <!-- A clear and concise description of what you would expect to happen. --> This is the error message I got: ImportError: cannot import name 'MarianMTModel' from 'transformers' (C:\Users\PRINCE\Anaconda3\envs\Brian's Enviroment\lib\site-packages\transformers\__init__.py)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7570/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7569
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7569/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7569/comments
https://api.github.com/repos/huggingface/transformers/issues/7569/events
https://github.com/huggingface/transformers/pull/7569
714,643,140
MDExOlB1bGxSZXF1ZXN0NDk3NjkyNzM5
7,569
Add Electra unexpected keys
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
MEMBER
null
This PR adds the necessary ELECTRA unexpected keys. Some keys are only used with models that have a different embedding size to their hidden size, in order to do the projection. Some models (such as the `large` ELECTRA variants), do not leverage these weights, as they have the same embedding/hidden sizes. Fixes #7530.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7569/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7569/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7569", "html_url": "https://github.com/huggingface/transformers/pull/7569", "diff_url": "https://github.com/huggingface/transformers/pull/7569.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7569.patch", "merged_at": 1601887779000 }
https://api.github.com/repos/huggingface/transformers/issues/7568
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7568/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7568/comments
https://api.github.com/repos/huggingface/transformers/issues/7568/events
https://github.com/huggingface/transformers/pull/7568
714,488,667
MDExOlB1bGxSZXF1ZXN0NDk3NTY0MjEz
7,568
[Model card] Java Code Summarizer model
{ "login": "ncoop57", "id": 7613470, "node_id": "MDQ6VXNlcjc2MTM0NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ncoop57", "html_url": "https://github.com/ncoop57", "followers_url": "https://api.github.com/users/ncoop57/followers", "following_url": "https://api.github.com/users/ncoop57/following{/other_user}", "gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}", "starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions", "organizations_url": "https://api.github.com/users/ncoop57/orgs", "repos_url": "https://api.github.com/users/ncoop57/repos", "events_url": "https://api.github.com/users/ncoop57/events{/privacy}", "received_events_url": "https://api.github.com/users/ncoop57/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,601
1,601
1,601
CONTRIBUTOR
null
Initial version of java code summarizer model for generating code comments.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7568/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7568", "html_url": "https://github.com/huggingface/transformers/pull/7568", "diff_url": "https://github.com/huggingface/transformers/pull/7568.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7568.patch", "merged_at": 1601887757000 }
https://api.github.com/repos/huggingface/transformers/issues/7567
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7567/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7567/comments
https://api.github.com/repos/huggingface/transformers/issues/7567/events
https://github.com/huggingface/transformers/issues/7567
714,476,806
MDU6SXNzdWU3MTQ0NzY4MDY=
7,567
Is training distilbert with TPU supported yet?
{ "login": "xinyiz1019", "id": 32743192, "node_id": "MDQ6VXNlcjMyNzQzMTky", "avatar_url": "https://avatars.githubusercontent.com/u/32743192?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xinyiz1019", "html_url": "https://github.com/xinyiz1019", "followers_url": "https://api.github.com/users/xinyiz1019/followers", "following_url": "https://api.github.com/users/xinyiz1019/following{/other_user}", "gists_url": "https://api.github.com/users/xinyiz1019/gists{/gist_id}", "starred_url": "https://api.github.com/users/xinyiz1019/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xinyiz1019/subscriptions", "organizations_url": "https://api.github.com/users/xinyiz1019/orgs", "repos_url": "https://api.github.com/users/xinyiz1019/repos", "events_url": "https://api.github.com/users/xinyiz1019/events{/privacy}", "received_events_url": "https://api.github.com/users/xinyiz1019/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Distillation is not supported (yet) on TPU. You can check the status of different example scripts [here](https://github.com/huggingface/transformers/tree/master/examples). Those supported by `Trainer` or `TFTrainer` or `pytorch-lightning` can be run on TPU, others cannot." ]
1,601
1,601
1,601
NONE
null
# 🚀 Feature request Hi! I tried training my own distilbert model with [this code](https://github.com/huggingface/transformers/blob/master/examples/distillation/train.py) using GPU, and it was a success. I'm wondering if training distilbert model with TPU is supported yet, or if there's any plan to release a new version in which TPU is supported?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7567/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7567/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7566
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7566/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7566/comments
https://api.github.com/repos/huggingface/transformers/issues/7566/events
https://github.com/huggingface/transformers/issues/7566
714,372,082
MDU6SXNzdWU3MTQzNzIwODI=
7,566
Trainer incorrectly checks pytorch version
{ "login": "Rexhaif", "id": 5154447, "node_id": "MDQ6VXNlcjUxNTQ0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rexhaif", "html_url": "https://github.com/Rexhaif", "followers_url": "https://api.github.com/users/Rexhaif/followers", "following_url": "https://api.github.com/users/Rexhaif/following{/other_user}", "gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions", "organizations_url": "https://api.github.com/users/Rexhaif/orgs", "repos_url": "https://api.github.com/users/Rexhaif/repos", "events_url": "https://api.github.com/users/Rexhaif/events{/privacy}", "received_events_url": "https://api.github.com/users/Rexhaif/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I am unsure on why you think this test is wrong: if this test is true, we import APEX. So if we check with `<=`, we will then try to import APEX which is exactly what we are trying to avoid.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,607
1,607
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-glibc2.27 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes(not applicable) - Using distributed or parallel set-up in script?: not applicable ### Who can help @sgugger and @prajjwal1 (he added it, according to git blame) ## Information I'm running token classification example on my own data, and i've faced trouble with fp16 training with torch 1.6.0. Script says that i need apex installed to use fp16 option. However, apex is not required since torch 1.6.0 came out with native amp support. I've dived into trainer code, and found that there is version checking line: https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/src/transformers/trainer.py#L65 Apparently, it is slightly incorrect. There should be <= instead of <. So it will not try to import apex if torch version is greater OR EQUAL to 1.6 ```python import torch from packaging import version print(version.parse(torch.__version__) < version.parse("1.6")) # -> False print(version.parse(torch.__version__) <= version.parse("1.6")) # -> True ``` ## To reproduce 1. Install torch 1.6.0(and do not install apex) 2. clone repository 3. cd into examples/token-classification 4. add '--fp16' to the bottom of run.sh script 5. execute run.sh script ## Expected behavior Script works well on torch 1.6.0 without apex installed
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7566/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7565
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7565/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7565/comments
https://api.github.com/repos/huggingface/transformers/issues/7565/events
https://github.com/huggingface/transformers/issues/7565
714,353,256
MDU6SXNzdWU3MTQzNTMyNTY=
7,565
Two slow deberta test failures
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1834088753, "node_id": "MDU6TGFiZWwxODM0MDg4NzUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Tests", "name": "Tests", "color": "a6fcca", "default": false, "description": "Related to tests" } ]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[]
1,601
1,602
1,602
CONTRIBUTOR
null
https://github.com/huggingface/transformers/runs/1204063498?check_suite_focus=true ``` FAILED tests/test_modeling_deberta.py::DebertaModelIntegrationTest::test_inference_classification_head FAILED tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_torch_encode_plus_sent_to_model ``` @LysandreJik I think ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7565/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7564
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7564/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7564/comments
https://api.github.com/repos/huggingface/transformers/issues/7564/events
https://github.com/huggingface/transformers/pull/7564
714,291,898
MDExOlB1bGxSZXF1ZXN0NDk3NDE2NDY0
7,564
Update normalising method in oneshot classifier
{ "login": "sachinruk", "id": 1410927, "node_id": "MDQ6VXNlcjE0MTA5Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/1410927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sachinruk", "html_url": "https://github.com/sachinruk", "followers_url": "https://api.github.com/users/sachinruk/followers", "following_url": "https://api.github.com/users/sachinruk/following{/other_user}", "gists_url": "https://api.github.com/users/sachinruk/gists{/gist_id}", "starred_url": "https://api.github.com/users/sachinruk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sachinruk/subscriptions", "organizations_url": "https://api.github.com/users/sachinruk/orgs", "repos_url": "https://api.github.com/users/sachinruk/repos", "events_url": "https://api.github.com/users/sachinruk/events{/privacy}", "received_events_url": "https://api.github.com/users/sachinruk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @joeddav ", "Hey @sachinruk, thanks for taking the time to contribute 🤗\r\n\r\nI take your point and the case that you've described can certainly happen with the current method, but I maintain what I said in the other thread that there's not really a single correct way of doing this. We're highjacking the outputs of a model trained on a completely different distribution (NLI data) for own own purposes, so it's just a game of figuring out what makes intuitive sense and what empirically works the best.\r\n\r\nI ran a quick benchmark on the AG's News topic classification dataset comparing the current method with the one you've proposed, and they performed similarly. I got a weighted F1 of ~70 with the entailment-only method used in the pipeline and around 68 with the method you've proposed.\r\n\r\nIf you can show that your proposed method empirically does significantly better, we could look into changing it or adding it as an additional kwarg to specify the method. But as is, I don't think it makes sense to change the behavior of the pipeline under people's feet." ]
1,601
1,602
1,602
CONTRIBUTOR
null
Hi, I know I raised this issue before here: https://github.com/huggingface/transformers/pull/5760#issuecomment-673840015 but I do think that it is worth taking a second look atleast. So my main concern is, since we are dealing with logits, currently all that happens is to take the softmax over the entailment logits. This does guarantee summing to one, however, it doesn't care for the scale of the logits. For example suppose the logits for two possible classes for a given sentence came out as: ``` [[1000, 10, 10], [0.1, 0.1, 0.9]] ``` at the current method would give class1 the higher probability, even though, it can clearly be seen that class1 is a contradiction (since it is 100x contradiction in the log scale), and similarly class2 is clearly an entailment. Now I do understand that this is an extreme case, but just to cover the bases, what I propose is that: 1. We softmax across every single sentence-class pair to get everything in the same scale. 2. Get a probability measurement over entailment by simply dividing by sum of probabilities. As seen [here](https://github.com/huggingface/transformers/pull/5760#issuecomment-673840015 ) the numbers that we get are different but only slightly for the example shown. I do apologise about raising this again, just simply trying to help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7564/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7564", "html_url": "https://github.com/huggingface/transformers/pull/7564", "diff_url": "https://github.com/huggingface/transformers/pull/7564.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7564.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7563
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7563/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7563/comments
https://api.github.com/repos/huggingface/transformers/issues/7563/events
https://github.com/huggingface/transformers/issues/7563
714,228,016
MDU6SXNzdWU3MTQyMjgwMTY=
7,563
Error Loading Gpt-2 model after training from scratch.
{ "login": "parthplc", "id": 35425925, "node_id": "MDQ6VXNlcjM1NDI1OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/35425925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/parthplc", "html_url": "https://github.com/parthplc", "followers_url": "https://api.github.com/users/parthplc/followers", "following_url": "https://api.github.com/users/parthplc/following{/other_user}", "gists_url": "https://api.github.com/users/parthplc/gists{/gist_id}", "starred_url": "https://api.github.com/users/parthplc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/parthplc/subscriptions", "organizations_url": "https://api.github.com/users/parthplc/orgs", "repos_url": "https://api.github.com/users/parthplc/repos", "events_url": "https://api.github.com/users/parthplc/events{/privacy}", "received_events_url": "https://api.github.com/users/parthplc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! How did you train your GPT-2 from scratch? Which framework did you use?", "> Hello! How did you train your GPT-2 from scratch? Which framework did you use?\r\n\r\nUsed PyTorch framework.", "Used this script\r\n```\r\nimport os\r\nimport gc\r\nimport glob\r\nimport torch\r\nimport pickle\r\nimport joblib\r\nfrom tqdm.auto import tqdm\r\nfrom pathlib import Path\r\nfrom tokenizers import ByteLevelBPETokenizer\r\nfrom transformers import GPT2Tokenizer\r\nimport torch\r\nfrom transformers import GPT2TokenizerFast\r\nfrom transformers import GPT2LMHeadModel\r\nfrom transformers import DataCollatorForLanguageModeling\r\nfrom transformers import TextDataset\r\nfrom transformers import GPT2Config\r\nfrom transformers import Trainer, TrainingArguments\r\n\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('hindi/')\r\nvocab_size = tokenizer.vocab_size \r\nprint(vocab_size)\r\nprint(torch.cuda.is_available())\r\n\r\n\r\nconfig = GPT2Config(\r\n vocab_size=vocab_size \r\n)\r\n\r\ntokenizer = GPT2TokenizerFast.from_pretrained(\"hindi/\", max_len=512)\r\nmodel = GPT2LMHeadModel(config=config)\r\nprint(model.num_parameters())\r\n\r\nprint(\"Now let's build our training Dataset\")\r\n\r\ndataset = TextDataset(\r\n tokenizer=tokenizer,\r\n file_path=\"data/train.txt\",\r\n block_size=132,\r\n)\r\n\r\nprint(\"Start Training\")\r\n\r\ndata_collator = DataCollatorForLanguageModeling(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n)\r\n\r\nprint(\"Trainer Classes\")\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"hindi/\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_gpu_train_batch_size=64,\r\n save_steps=10_000,\r\n save_total_limit=2,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=dataset,\r\n prediction_loss_only=True,\r\n)\r\n\r\ntrainer.train()\r\n\r\ntrainer.save_model(\"hindi/\")\r\nprint(\"done\")\r\n```\r\nfor training @LysandreJik " ]
1,601
1,602
1,602
NONE
null
``` Error(s) in loading state_dict for GPT2LMHeadModel: size mismatch for transformer.h.0.mlp.c_fc.weight: copying a param with shape torch.Size([768, 6]) from checkpoint, the shape in current model is torch.Size([768, 3072]). ``` I trained a GPT-2 model from scratch. When I tried loading the model using ``` config = GPT2Config.from_json_file('../input/hindigpt/config.json') # config.type_vocab_size=3072 model = GPT2LMHeadModel(config) model.load_state_dict(torch.load('../input/hindigpt/pytorch_model.bin')) ``` I am getting the above error. Here is the copy of my config.json file ``` { "activation_function": "gelu_new", "architectures": [ "GPT2LMHeadModel" ], "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "gradient_checkpointing": false, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "model_type": "gpt2", "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_inner": 6, "n_layer": 6, "n_positions": 512, "resid_pdrop": 0.1, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "vocab_size": 36021 } ``` Can anyone help me with this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7563/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7563/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7562
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7562/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7562/comments
https://api.github.com/repos/huggingface/transformers/issues/7562/events
https://github.com/huggingface/transformers/pull/7562
714,211,864
MDExOlB1bGxSZXF1ZXN0NDk3MzYxODI3
7,562
Output global_attentions in Longformer models
{ "login": "gui11aume", "id": 1017195, "node_id": "MDQ6VXNlcjEwMTcxOTU=", "avatar_url": "https://avatars.githubusercontent.com/u/1017195?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gui11aume", "html_url": "https://github.com/gui11aume", "followers_url": "https://api.github.com/users/gui11aume/followers", "following_url": "https://api.github.com/users/gui11aume/following{/other_user}", "gists_url": "https://api.github.com/users/gui11aume/gists{/gist_id}", "starred_url": "https://api.github.com/users/gui11aume/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gui11aume/subscriptions", "organizations_url": "https://api.github.com/users/gui11aume/orgs", "repos_url": "https://api.github.com/users/gui11aume/repos", "events_url": "https://api.github.com/users/gui11aume/events{/privacy}", "received_events_url": "https://api.github.com/users/gui11aume/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We are actually having a longer internal discussion about the general handling of different attentions - this might still take a couple of days to be decided.", "> Cool, wonderful that you added so many tests!\r\n> \r\n> @patrickvonplaten, you say:\r\n> \r\n> > if global attention was used the attentions were set to the global attentions and the local attentions were discarded\r\n> \r\n> In this case, wouldn't the best model output be one where there are both `local_attentions`, `global_attentions` **as well as** `attentions` that are kept simply for backwards compatibility?\r\n\r\nThe previous design led to errors as shown here: https://github.com/huggingface/transformers/issues/5646 -> so I think it's fine to break backwards compatibility here. `local_attentions` would arguably be a better name than `attentions`, but for consistency with other models and for a standard case where `global_attention_mask=None`, so that `local_attentions` == (all) `attentions`, I would prefer to keep the name `attentions` here. \r\n\r\n> \r\n> Other than that and the docstrings, LGTM!\r\n\r\nDocstrings will be corrected!\r\n\r\n", "Works for me!", "@gui11aume great work again - understanding longformer's attention is not straightforward and your doc string was spot-on! Looking forward to your next contribution ;-) Hope you're fine with the small changes I made", "@patrickvonplaten @lalitpagaria this merge PR breaks longformer training with gradient checkpointing True\r\nplease fix as i am unable to train models with latest models. \r\n\r\n\r\nerror comes on this line \r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/modeling_longformer.py#L1072\r\n\r\nmodel expect 2 to 6 positional arguments, but 7 where giving. ", "> @patrickvonplaten @lalitpagaria this merge PR breaks longformer training with gradient checkpointing True\r\n> please fix as i am unable to train models with latest models.\r\n> \r\n> error comes on this line\r\n> https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_longformer.py#L1072\r\n> \r\n> model expect 2 to 6 positional arguments, but 7 where giving.\r\n\r\nHey @manishiitg,\r\n\r\nThanks a lot for message! This did indeed break gradient checkpointing for longformer - sorry! This PR fixes it: https://github.com/huggingface/transformers/pull/8415" ]
1,601
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #7514 [From @patrickvonplaten]: This PR introduces a new structure for the output attentions in Longformer. There are two types of attentions in Longformer: local attention outputs and global attention outputs. Previously, if global attention was used the `attentions` were set to the global attentions and the local attentions were discarded. This is suboptimal as one has no access to the local attentions in this case. The better design IMO is to have both `attentions` and `global_attentions` in Longformer (similar to `encoder_attentions`, `decoder_attentions` in Seq2Seq and `attentions`, `ngram_attentions` in ProphetNet). Also, the PR switches from tuple indexing to using `ModelOutput` kwargs in the `test_attention_output` function which IMO we should bit by bit do for all tests from now on. In PT Longformer, the `is_global_attn_index` tensor is now only calculated once instead for each layer which slightly speeds up computation. Awesome job @gui11aume! Especially for the docstring -> the description of `global_attentions` and `attentions` is impeccable. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7562/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7562", "html_url": "https://github.com/huggingface/transformers/pull/7562", "diff_url": "https://github.com/huggingface/transformers/pull/7562.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7562.patch", "merged_at": 1604607044000 }
https://api.github.com/repos/huggingface/transformers/issues/7561
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7561/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7561/comments
https://api.github.com/repos/huggingface/transformers/issues/7561/events
https://github.com/huggingface/transformers/pull/7561
714,182,304
MDExOlB1bGxSZXF1ZXN0NDk3MzQyNTgx
7,561
Moved feature generation into getitem to save ram
{ "login": "mariusjohan", "id": 49961316, "node_id": "MDQ6VXNlcjQ5OTYxMzE2", "avatar_url": "https://avatars.githubusercontent.com/u/49961316?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariusjohan", "html_url": "https://github.com/mariusjohan", "followers_url": "https://api.github.com/users/mariusjohan/followers", "following_url": "https://api.github.com/users/mariusjohan/following{/other_user}", "gists_url": "https://api.github.com/users/mariusjohan/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariusjohan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariusjohan/subscriptions", "organizations_url": "https://api.github.com/users/mariusjohan/orgs", "repos_url": "https://api.github.com/users/mariusjohan/repos", "events_url": "https://api.github.com/users/mariusjohan/events{/privacy}", "received_events_url": "https://api.github.com/users/mariusjohan/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,608
1,608
NONE
null
# What does this PR do? I had this issue that my Colab Notebook would run out of memory when creating the feature in the SQuAD dataset, so I basically just moved it to the getitem section. I also added a function to force create the features immediatly incase somehow for some reason would want the features created in the beginning Fixes # (issue) ## Before submitting - I submitted an issue, but never got any feedback. - I did not update the documentation due to the fact that it doesn't exist - I did not write tests either since I couldn't get the tests to work out on my machine. However I did test it while training the model. However I could easily write tests if you believe it's necessary. ## Who can review? @patrickvonplaten, maybe you can take a look at the pull request?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7561/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7561", "html_url": "https://github.com/huggingface/transformers/pull/7561", "diff_url": "https://github.com/huggingface/transformers/pull/7561.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7561.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7560
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7560/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7560/comments
https://api.github.com/repos/huggingface/transformers/issues/7560/events
https://github.com/huggingface/transformers/pull/7560
714,175,378
MDExOlB1bGxSZXF1ZXN0NDk3MzM3ODA2
7,560
Remove labels from the RagModel example
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
COLLABORATOR
null
# What does this PR do? As pointed out in #7554, the `RagModel` does not accept `labels`. Not sure why they are in the documentation. This PR fixes that. Fixes #7554
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7560/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7560", "html_url": "https://github.com/huggingface/transformers/pull/7560", "diff_url": "https://github.com/huggingface/transformers/pull/7560.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7560.patch", "merged_at": 1601847564000 }
https://api.github.com/repos/huggingface/transformers/issues/7559
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7559/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7559/comments
https://api.github.com/repos/huggingface/transformers/issues/7559/events
https://github.com/huggingface/transformers/issues/7559
714,155,124
MDU6SXNzdWU3MTQxNTUxMjQ=
7,559
Is this realy a list or a Dict[str, int]? I think the docstring might be wrong because in the model json file it is stored as a dict.
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
CONTRIBUTOR
null
https://github.com/huggingface/transformers/blob/9bdce3a4f91c6d53873582b0210e61c92bba8fd3/src/transformers/configuration_utils.py#L117 See here for example: https://s3.amazonaws.com/models.huggingface.co/bert/oliverguhr/german-sentiment-bert/config.json
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7559/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7558
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7558/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7558/comments
https://api.github.com/repos/huggingface/transformers/issues/7558/events
https://github.com/huggingface/transformers/pull/7558
714,145,143
MDExOlB1bGxSZXF1ZXN0NDk3MzE2NTQw
7,558
[Model card] SinhalaBERTo model.
{ "login": "keshan", "id": 94397, "node_id": "MDQ6VXNlcjk0Mzk3", "avatar_url": "https://avatars.githubusercontent.com/u/94397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keshan", "html_url": "https://github.com/keshan", "followers_url": "https://api.github.com/users/keshan/followers", "following_url": "https://api.github.com/users/keshan/following{/other_user}", "gists_url": "https://api.github.com/users/keshan/gists{/gist_id}", "starred_url": "https://api.github.com/users/keshan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keshan/subscriptions", "organizations_url": "https://api.github.com/users/keshan/orgs", "repos_url": "https://api.github.com/users/keshan/repos", "events_url": "https://api.github.com/users/keshan/events{/privacy}", "received_events_url": "https://api.github.com/users/keshan/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Thanks for sharing. If you'd like you can contribute sample inputs for Sinhala at https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts – Thanks!" ]
1,601
1,602
1,602
CONTRIBUTOR
null
This is the model card for keshan/SinhalaBERTo model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7558/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7558", "html_url": "https://github.com/huggingface/transformers/pull/7558", "diff_url": "https://github.com/huggingface/transformers/pull/7558.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7558.patch", "merged_at": 1602103253000 }
https://api.github.com/repos/huggingface/transformers/issues/7557
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7557/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7557/comments
https://api.github.com/repos/huggingface/transformers/issues/7557/events
https://github.com/huggingface/transformers/pull/7557
714,138,844
MDExOlB1bGxSZXF1ZXN0NDk3MzEyMTEy
7,557
Enable debug with TF2 and eager execution
{ "login": "Neptune-Trojans", "id": 68503564, "node_id": "MDQ6VXNlcjY4NTAzNTY0", "avatar_url": "https://avatars.githubusercontent.com/u/68503564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Neptune-Trojans", "html_url": "https://github.com/Neptune-Trojans", "followers_url": "https://api.github.com/users/Neptune-Trojans/followers", "following_url": "https://api.github.com/users/Neptune-Trojans/following{/other_user}", "gists_url": "https://api.github.com/users/Neptune-Trojans/gists{/gist_id}", "starred_url": "https://api.github.com/users/Neptune-Trojans/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Neptune-Trojans/subscriptions", "organizations_url": "https://api.github.com/users/Neptune-Trojans/orgs", "repos_url": "https://api.github.com/users/Neptune-Trojans/repos", "events_url": "https://api.github.com/users/Neptune-Trojans/events{/privacy}", "received_events_url": "https://api.github.com/users/Neptune-Trojans/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello!\r\n\r\nThanks for you PR. The reason we used `tf.gradients` is because it handles `None` gradients while `tape.gradients` don't, so for now unless we can find a better way to handle the `None` values for gradients, we will keep it like this.\r\n\r\nAlso the training is forced to be done in graph compilation with `tf.function` so the eager mode is anyway disactivated.", "Thanks for detailed answer.\r\nAnother reason I did the change is to deal with following error that I getting after upgrading to the laters version of transformers library.\r\n`ValueError: distributed_training_steps() should not modify its Python input arguments. Check if it modifies any lists or dicts passed as arguments. Modifying a copy is allowed.`\r\n\r\nThis error disappears after my change.\r\ntensorflow==2.3.1\r\ntransformers==3.3.1", "Ok, can you open an issue with the details on how to reproduce the error please.", "I am closing this pull request as I found workaround regarding my issue issue with **parameter modification** .\r\n" ]
1,601
1,601
1,601
NONE
null
TF2 with eager execution and tf.gradients not supported anymore. In order to run the code with eager execution and not collapse instead of tf.gradients should be tf.GradientTape. [how-to-compute-gradient-of-output-wrt-input-in-tensorflow-2-0](https://stackoverflow.com/questions/59145221/how-to-compute-gradient-of-output-wrt-input-in-tensorflow-2-0)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7557/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7557/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7557", "html_url": "https://github.com/huggingface/transformers/pull/7557", "diff_url": "https://github.com/huggingface/transformers/pull/7557.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7557.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7556
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7556/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7556/comments
https://api.github.com/repos/huggingface/transformers/issues/7556/events
https://github.com/huggingface/transformers/issues/7556
714,112,878
MDU6SXNzdWU3MTQxMTI4Nzg=
7,556
Problem with automatic best model loading.
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's hard to know what's going on without seeing your code. The error indicates `args.eval_steps = 0` and this argument is not modified by the `Trainer` itself, so you should make sure you did set it to something >0.", "You are right. I had `eval_steps = 0`. Thanks for the feedback!" ]
1,601
1,601
1,601
CONTRIBUTOR
null
When I provide `load_best_model_at_end=True`, `metric_for_best_model='eval_f1_macro` and `greater_is_better=True` together with `save_total_limit=2` this happens: ``` Traceback (most recent call last): File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/optuna/study.py", line 778, in _run_trial result = func(trial) File "train_aws.py", line 174, in opt trainer.train() File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/transformers/trainer.py", line 810, in train and self.global_step % self.args.eval_steps == 0 ZeroDivisionError: integer division or modulo by zero Traceback (most recent call last): File "train_aws.py", line 197, in <module> study.optimize(opt) File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/optuna/study.py", line 328, in optimize func, n_trials, timeout, catch, callbacks, gc_after_trial, None File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/optuna/study.py", line 726, in _optimize_sequential self._run_trial_and_callbacks(func, catch, callbacks, gc_after_trial) File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/optuna/study.py", line 755, in _run_trial_and_callbacks trial = self._run_trial(func, catch, gc_after_trial) File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/optuna/study.py", line 778, in _run_trial result = func(trial) File "train_aws.py", line 174, in opt trainer.train() File "/home/ubuntu/miniconda3/envs/hf/lib/python3.7/site-packages/transformers/trainer.py", line 810, in train and self.global_step % self.args.eval_steps == 0 ZeroDivisionError: integer division or modulo by zero ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7556/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7555
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7555/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7555/comments
https://api.github.com/repos/huggingface/transformers/issues/7555/events
https://github.com/huggingface/transformers/pull/7555
714,079,519
MDExOlB1bGxSZXF1ZXN0NDk3MjY4MzU0
7,555
Update Code example according to deprecation of AutoModeWithLMHead
{ "login": "jshamg", "id": 32615911, "node_id": "MDQ6VXNlcjMyNjE1OTEx", "avatar_url": "https://avatars.githubusercontent.com/u/32615911?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jshamg", "html_url": "https://github.com/jshamg", "followers_url": "https://api.github.com/users/jshamg/followers", "following_url": "https://api.github.com/users/jshamg/following{/other_user}", "gists_url": "https://api.github.com/users/jshamg/gists{/gist_id}", "starred_url": "https://api.github.com/users/jshamg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jshamg/subscriptions", "organizations_url": "https://api.github.com/users/jshamg/orgs", "repos_url": "https://api.github.com/users/jshamg/repos", "events_url": "https://api.github.com/users/jshamg/events{/privacy}", "received_events_url": "https://api.github.com/users/jshamg/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "@julien-c what is the test that failed testing? i mean, there is no test that should fail because of my minor changes..." ]
1,601
1,601
1,601
CONTRIBUTOR
null
'The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.' I dont know how to change the 'How to use this model directly from the 🤗/transformers library:' part since it is not part of the model-paper # What does this PR do? Fix the future deprecation of `AutoModelWithLMHead` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @julien-c <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7555/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7555", "html_url": "https://github.com/huggingface/transformers/pull/7555", "diff_url": "https://github.com/huggingface/transformers/pull/7555.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7555.patch", "merged_at": 1601900482000 }
https://api.github.com/repos/huggingface/transformers/issues/7554
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7554/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7554/comments
https://api.github.com/repos/huggingface/transformers/issues/7554/events
https://github.com/huggingface/transformers/issues/7554
714,076,237
MDU6SXNzdWU3MTQwNzYyMzc=
7,554
RAG: error in outputs = model(input_ids=input_ids, labels=input_dict["labels"])
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, the `RagModel` just contains the bare model and has no training objective (like all HF `XxxModel`). It doesn't take a `labels` argument, the example in the documentation is wrong." ]
1,601
1,601
1,601
CONTRIBUTOR
null
I tried the following code given in the documentation. ``` from transformers import RagTokenizer, RagRetriever, RagModel import torch tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base") retriever = RagRetriever.from_pretrained("facebook/rag-token-base", index_name="exact", use_dummy_dataset=True) # initialize with RagRetriever to do everything in one forward call model = RagModel.from_pretrained("facebook/rag-token-base", retriever=retriever) input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt") input_ids = input_dict["input_ids"] outputs = model(input_ids=input_ids, labels=input_dict["labels"]) ``` In the last step, it gives the following error. **/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'labels'**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7554/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7553
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7553/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7553/comments
https://api.github.com/repos/huggingface/transformers/issues/7553/events
https://github.com/huggingface/transformers/pull/7553
714,070,658
MDExOlB1bGxSZXF1ZXN0NDk3MjYxODE5
7,553
[model_card] bert-base-5lang-cased
{ "login": "amineabdaoui", "id": 17952908, "node_id": "MDQ6VXNlcjE3OTUyOTA4", "avatar_url": "https://avatars.githubusercontent.com/u/17952908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amineabdaoui", "html_url": "https://github.com/amineabdaoui", "followers_url": "https://api.github.com/users/amineabdaoui/followers", "following_url": "https://api.github.com/users/amineabdaoui/following{/other_user}", "gists_url": "https://api.github.com/users/amineabdaoui/gists{/gist_id}", "starred_url": "https://api.github.com/users/amineabdaoui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amineabdaoui/subscriptions", "organizations_url": "https://api.github.com/users/amineabdaoui/orgs", "repos_url": "https://api.github.com/users/amineabdaoui/repos", "events_url": "https://api.github.com/users/amineabdaoui/events{/privacy}", "received_events_url": "https://api.github.com/users/amineabdaoui/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,601
1,601
1,601
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7553/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7553/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7553", "html_url": "https://github.com/huggingface/transformers/pull/7553", "diff_url": "https://github.com/huggingface/transformers/pull/7553.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7553.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7552
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7552/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7552/comments
https://api.github.com/repos/huggingface/transformers/issues/7552/events
https://github.com/huggingface/transformers/pull/7552
714,062,850
MDExOlB1bGxSZXF1ZXN0NDk3MjU1OTMz
7,552
Add batch inferencing support for GPT2LMHeadModel
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This enables significantly faster generation. \r\nHere is a simple test I ran.\r\n| | generate 20 tokens | generate 100 tokens |\r\n|-----------------|---------------|----------------|\r\n| batch size = 1 | 45.2 s | 3min 42s |\r\n| batch size = 32 | 2.25 s (20x) | 8.36 s (26.5x) |\r\n\r\n```python\r\n# following above code\r\ndata = sentences * 128 # total 256 sentences\r\nmodel.cuda();\r\ndata = [' '.join([x]*10) for x in data] # make the prompt longer to be more realistic\r\nfrom tqdm.auto import tqdm\r\n\r\ndef test(batchsize = 1, max_gen_len = 20):\r\n for i in tqdm(range(0, len(data), batchsize)):\r\n batch = data[i: i+batchsize]\r\n inputs = tokenizer(batch, return_tensors=\"pt\", padding=True)\r\n\r\n output_sequences = model.generate(\r\n input_ids=inputs['input_ids'].to(model.device),\r\n attention_mask=inputs['attention_mask'].to(model.device),\r\n do_sample=False, # disable sampling to test if batching affects output\r\n pad_token_id=tokenizer.eos_token_id,\r\n max_length=len(inputs['input_ids'][0]) + max_gen_len, # let it generate longer\r\n )\r\n outputs = [tokenizer.decode(x) for x in output_sequences]\r\n\r\n\r\n%time test(1, 20)\r\n\r\n%time test(32, 20)\r\n\r\n%time test(1, 100)\r\n\r\n%time test(32, 100)\r\n```\r\n", "Hey @cccntu - this is a great addition! I very much like your appraoch here. \r\nI also checked that all GPT2 SLOW tests function correctly and added a test to make sure batch generation works as expected!\r\n\r\nWith the current implementation, the user would not be able to define his own `position_ids` for generate, since they are always overwritten in the `prepare_input_ids_for_generation`, but I think this is OK because:\r\n1) Previously, it was impossible for the user to use `position_ids` because they would have to be extended by 1 each generation step - a feature which is not implemented\r\n2) I don't see any reason why position_ids should be different from the way it is implement in the PR right now\r\n\r\n@LysandreJik - this feature was heavily requested by the community (linked a couple of issues below) and I think this is a great way to handle GPT2 batch generation. What do you think?", "Related issues: https://github.com/huggingface/transformers/issues/6742, https://github.com/huggingface/transformers/issues/4746,\r\nhttps://github.com/huggingface/transformers/issues/4824\r\n\r\n", "@cccntu - Great work on this PR! If this PR is merged and you want to help the community a tiny bit more, you could give a short description (similar to what you've done above) on how to do batch generation with GPT2 here: https://discuss.huggingface.co/t/batch-generation-with-gpt2/1517. Many people have been asking for this so they would be very glad to see a short forum post about it. \r\n\r\nThanks a lot again! ", "Awesome, great work @cccntu ! It would be amazing if you could write a little description of how your PR works on the forum: https://discuss.huggingface.co/t/batch-generation-with-gpt2/1517 - the community would be very thankful I think :-) ", "@patrickvonplaten Thanks for the suggestions! I just added some description to the forum post. 😃 \r\n\r\nlink to the post for future reference: https://discuss.huggingface.co/t/batch-generation-with-gpt2/1517/2", "Can you please add batch inferencing for GPT2DoubleHeadsModel too?", "@patrickvonplaten @cccntu \r\n\r\nI can see how batch generation is now available. I was wondering, if there's already a way to do the same but with different arguments of `max_len` & `min_length` per encoded_text in a batch in `model.generate()`. Goal here is to generate new text for a batch of encoded text with variable size.", "Hi @spate141, \r\n\r\nDid you mean passing a `max_len` & `min_length` as n-element array?\r\nIt would fail here: https://github.com/huggingface/transformers/blob/121dd4332b7e44932b0fbe2fa18bc9fa0131402c/src/transformers/generation_utils.py#L289\r\nActually, the main issue is here: https://github.com/huggingface/transformers/blob/121dd4332b7e44932b0fbe2fa18bc9fa0131402c/src/transformers/generation_utils.py#L539\r\nWe need the right-most logits not be padding, and without modifying `generation_utils.py`, we need to use left-padding, and consequently we need this PR to make sure the positional embedding is correct.\r\n\r\nYou can also checkout the discussions in #3021, or the forum post: https://discuss.huggingface.co/t/batch-generation-with-gpt2/1517/3 \r\n", "> Did you mean passing a `max_len` & `min_length` as n-element array?\r\n- Yes, exactly! Instead of single int values for all texts in a batch... an array of values for each text in a batch.\r\n\r\nI saw the code and I can see why it will fail. https://github.com/huggingface/transformers/issues/3021 seems informative, I'll take a look.\r\n\r\n#### Meanwhile I found this way to get what I mentioned:\r\n- Let's assume a model accepts input of `max_len = 64` and we want to generate new text for a piece of text of size 300 tokens. \r\n- Since we know what's the `max_len` is, we have make sure that we split our input text into 5 batches: `[64, 60, 58, 50, 56, 12]`. \r\n - This was done in some clever way to ensure that each text segment follows valid grammar rule and also don't go above that `max_len` limit. \r\n- For all these 6 text segments we want to generate new text with following min, max values:\r\n - min_values: `[100, 100, 100, 100, 100, 25]`\r\n - max_values: `[120, 120, 120, 120, 120, 50]`\r\n- To do that, I can just pass a global min & max values (i.e. 100, 120 respectively) to `model.generate()` along with a tokenized batch of input text segments. \r\n - input_ids_shape: `(6, 64)`, min_len: `100`, max_len: `120`\r\n- My only issue here is regarding last text segment in a batch of (6, 64) tokenized tensor. Ideally, we want new generated text of size min of 25 tokens and max of 50 tokens. Generating a new text of size 100 tokens from an input of 12 tokens will be gobbledygook. \r\n- To handle this, I can just take the last segment of generated text that belongs to our last input text; and split the text and discard everything above its ideal original min/max limit, i.e. (25, 50)\r\n\r\nOR\r\n- I can just go with doing same but I combine first 5 text segments and generate text on (5, 64) and generate text for the last one (1, 64) in two pass\r\n\r\nOR\r\n- I can just generate everything in 6 pass for each 6 text segments and pass their ideal individual min/max limits\r\n\r\n@cccntu In your 2nd comment to this pull request, you posted some impressive results on why doing batch_generation is ideal, specially let's say when you have a GPU. I'm just trying to figure out if doing the same in my case is worth the latency when I have to do some post-processing. I'll post some latency results once I have this setup ready.\r\n", "**Update:** @cccntu \r\n\r\nI went with my 1st approach where I'm generating text for all texts in a single batch with global min, max values. In most cases where my last text chunk in batch is _smaller_ meaning its min/max values are smaller than rest of text chunks in a same batch; I'm just trimming tokens. Results are impressive so far. Some numbers just in case someone stumble upon this thread in future:\r\n\r\n**Fixed size text batches:**\r\n- This shows when passing list of text chunks as single batch tensor Vs passing text chunks as individual in for loop. `max_len`, `min_len` variables are kept same in both. Y-axis shows total time in seconds for model to finish generating text.\r\n- All the text chunks are of same size.\r\n\r\n![image](https://user-images.githubusercontent.com/10580847/109713174-81d48000-7b66-11eb-94a6-d0c3e6ac77b8.png)\r\n\r\n**Variable size text batches:**\r\n- Same as above, but here I'm using variable size text chunks.\r\n- For example: `2 Long, 1 Short` means my input is 2 long size texts + 1 short size text. This is to test what happens when I'm generating text for variable size text chunks in a single batch.\r\n- Also to note that I'm trimming generated text for short text chunks in post processing. So, time on Y-axis include that.\r\n\r\n![image](https://user-images.githubusercontent.com/10580847/109713189-87ca6100-7b66-11eb-8859-471c6929668d.png)\r\n\r\nOverall, batch text generation seems very useful(🎉) despite one has to add some overhead on top to manage some use cases. ", "@cccntu Thanks for your great work! I stumbled upon this thread and would like to know:\r\n1. Would this batching mechanism works for GPT-NEO? \r\n2. Would this batching mechanism works for pipeline inference?\r\nIf so, is there any changes or considerations I need to do or know?", "Thanks for the code! I wonder if now I could generate sentences in a batch withother models (BertGeneration, for instance)? Looking forward to your reply!", "@cccntu Thanks for your code. By using the correct position_id in this case, we can do batch inference in pytorch model now.\r\n\r\nBut when we export the gpt2 model to onnx with `GPT2OnnxConfig`\r\n\r\n```python\r\nonnx_config = GPT2OnnxConfig(model.config)\r\n## or using past_key_values mode\r\n# onnx_config = GPT2OnnxConfig(model.config, use_past=True)\r\n```\r\n\r\nThen the onnx model inputs don't contation position_id but only input_ids nand attention_masks。\r\nSo we can't do correct batch_inference with onnx model now, right?\r\n", "Thank you for the code. I wonder if you have tested whether there is performance drop when using batch generation? Especially when the GPT-2 model is finetuned with right-padded data." ]
1,601
1,675
1,602
CONTRIBUTOR
null
# What does this PR do? This adds correct (absolute) positional embedding to the output, when given attention mask. The positional embedding is calculated using attention mask. Fixes #3021 Here is an example usage: ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2LMHeadModel.from_pretrained('gpt2', return_dict=True) # when generating, we will use the logits of right-most token to predict the next token # so the padding should be on the left tokenizer.padding_side = "left" tokenizer.pad_token = tokenizer.eos_token # to avoid an error sentences = ["Hello, my dog is a little", "Hello, my dog is", # use different length sentences to test batching ] inputs = tokenizer(sentences, return_tensors="pt", padding=True) output_sequences = model.generate( input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'], do_sample=False, # disable sampling to test if batching affects output ) for i in range(len(sentences)): print(tokenizer.decode(output_sequences[i])) # you can use skip_special_tokens=True in decode() to remove padding token # but note that it will also remove other special_tokens ``` outputs: ``` Hello, my dog is a little bit of a mess. I'm not sure if he's going <|endoftext|><|endoftext|>Hello, my dog is a little bit of a mess. I'm not sure if he ``` comment: * I think this should be used in `examples/text-generation/run_generation.py`, but I don't know much about other models, and it (code) would be weird if only gpt2 supports batch inferencing. albert, bert, GPT2, XLM: @LysandreJik TextGeneration: @TevenLeScao documentation: @sgugger @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7552/reactions", "total_count": 15, "+1": 7, "-1": 0, "laugh": 0, "hooray": 5, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7552/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7552", "html_url": "https://github.com/huggingface/transformers/pull/7552", "diff_url": "https://github.com/huggingface/transformers/pull/7552.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7552.patch", "merged_at": 1602675625000 }
https://api.github.com/repos/huggingface/transformers/issues/7551
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7551/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7551/comments
https://api.github.com/repos/huggingface/transformers/issues/7551/events
https://github.com/huggingface/transformers/issues/7551
714,046,914
MDU6SXNzdWU3MTQwNDY5MTQ=
7,551
RAG: NameError: name 'load_dataset' is not defined
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think this is a duplicate of #7536. RAG requires datasets and faiss to be installed in your environment to work properly.\r\nThe fix with proper error messages is on in #7537." ]
1,601
1,602
1,602
CONTRIBUTOR
null
I tried to load RAG according to the documentation. ` retriever = RagRetriever.from_pretrained("facebook/rag-token-base", index_name="exact", use_dummy_dataset=True) ` The above line gave the following error. **/python3.6/site-packages/transformers/retrieval_rag.py", line 220, in __init__ self.dataset = load_dataset( NameError: name 'load_dataset' is not defined**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7551/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7550
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7550/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7550/comments
https://api.github.com/repos/huggingface/transformers/issues/7550/events
https://github.com/huggingface/transformers/issues/7550
714,021,911
MDU6SXNzdWU3MTQwMjE5MTE=
7,550
Problem with Finetuned GPT-2
{ "login": "51naa", "id": 53081885, "node_id": "MDQ6VXNlcjUzMDgxODg1", "avatar_url": "https://avatars.githubusercontent.com/u/53081885?v=4", "gravatar_id": "", "url": "https://api.github.com/users/51naa", "html_url": "https://github.com/51naa", "followers_url": "https://api.github.com/users/51naa/followers", "following_url": "https://api.github.com/users/51naa/following{/other_user}", "gists_url": "https://api.github.com/users/51naa/gists{/gist_id}", "starred_url": "https://api.github.com/users/51naa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/51naa/subscriptions", "organizations_url": "https://api.github.com/users/51naa/orgs", "repos_url": "https://api.github.com/users/51naa/repos", "events_url": "https://api.github.com/users/51naa/events{/privacy}", "received_events_url": "https://api.github.com/users/51naa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This problem was resolved with 10 epochs of training." ]
1,601
1,601
1,601
NONE
null
Hi, I have written this code to finetune gpt-2 on a new corpus. ``` from transformers import ( AutoModelWithLMHead, AutoConfig, Trainer, AutoTokenizer, TextDataset, DataCollatorForLanguageModeling, TrainingArguments) def modelTrainer(text_path, output_dir, batch_size=2, conf='gpt2', cache_dir='./Cache'): config = AutoConfig.from_pretrained(conf) model = AutoModelWithLMHead.from_config(config) tokenizer = AutoTokenizer.from_pretrained(conf) data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) train_dataset = TextDataset( tokenizer=tokenizer, file_path=text_path, block_size=128, cache_dir=cache_dir ) training_args =TrainingArguments( output_dir=output_dir, num_train_epochs=1, per_device_train_batch_size=batch_size, warmup_steps=500, save_steps=500, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, prediction_loss_only=True ) trainer.train() trainer.save_model() ``` And then I use this to generate text from the finetuned model: ``` from transformers import pipeline, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('gpt2') def textGenerator(model_dir): gen = pipeline('text-generation', model=model_dir, tokenizer=tokenizer) return gen ``` Now my problem is that even with 1 epoch a training, the quality of generated text deteriorates drastically and I get some unknown tokens in the output, like: 'Hello,�\n\n,�\n”,\n and\n,,� the\n,\n\n the,\n� Alice\n\n,“\n the\n,,,“ on”, to� she the�\n'. I'm guessing there is a problem with tokenizer. Can anybody help me with this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7550/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7550/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7549
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7549/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7549/comments
https://api.github.com/repos/huggingface/transformers/issues/7549/events
https://github.com/huggingface/transformers/issues/7549
713,952,908
MDU6SXNzdWU3MTM5NTI5MDg=
7,549
Incorrect tokenization with tokens added using tokenizer.add_tokens()
{ "login": "Muks14x", "id": 11333048, "node_id": "MDQ6VXNlcjExMzMzMDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/11333048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Muks14x", "html_url": "https://github.com/Muks14x", "followers_url": "https://api.github.com/users/Muks14x/followers", "following_url": "https://api.github.com/users/Muks14x/following{/other_user}", "gists_url": "https://api.github.com/users/Muks14x/gists{/gist_id}", "starred_url": "https://api.github.com/users/Muks14x/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muks14x/subscriptions", "organizations_url": "https://api.github.com/users/Muks14x/orgs", "repos_url": "https://api.github.com/users/Muks14x/repos", "events_url": "https://api.github.com/users/Muks14x/events{/privacy}", "received_events_url": "https://api.github.com/users/Muks14x/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Pinging @n1t0 for advice.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,607
1,607
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: macOS-10.15.6-x86_64-i386-64bit - Python version: 3.8.3 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> It's a tokenization issue, so tagging @mfuntowicz Also happens with rust tokenizers, so tagging @n1t0 ## Information Model I am using (Bert, XLNet ...): RoBERTa (but happens anywhere) The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] irrelevant to the bug * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python >>> from transformers import RobertaTokenizer >>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base') >>> tokenizer.add_tokens(['\U00030001', '\U00030002', '\U00030002\U00030001']) 3 >>> tokenizer.tokenize('\U00030002\U00030001') ['\U00030002\U00030001'] >>> tokenizer.tokenize('\U00030001\U00030002\U00030001') ## produces incorrect output. the last two tokens should've been together and should not have gotten split ['\U00030001', '\U00030002', '\U00030001'] >>> tokenizer.unique_no_split_tokens ['\U00030001', '<s>', '</s>', '<unk>', '\U00030002\U00030001', '<mask>', '<pad>', '\U00030002'] >>> tokenizer.unique_no_split_tokens.sort(key=lambda x: -len(x)) ## On sorting the unique_no_split_tokens by the lengths, this seems to get fixed. I suspect that internally the code is checking the presence of added tokens in this order? >>> tokenizer.unique_no_split_tokens ['<mask>', '<unk>', '<pad>', '</s>', '<s>', '\U00030002\U00030001', '\U00030001', '\U00030002'] >>> tokenizer.tokenize('\U00030001\U00030002\U00030001') ['\U00030001', '\U00030002\U00030001'] ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Tokenization seems to depend on the order in which tokens get added to the model. Just to show what happens, I've added some (very high valued) unicode character tokens and run the tokenization. Basically, '\U00030002', '\U00030001' got split the first time, which should not have happened since ''\U00030002\U00030001' is part of the vocabulary. On sorting the tokenizer.unique_no_split_tokens list by length, it seems to fix this issue. This makes me uneasy using add_tokens now with tokens that share overlaps. Also, the problem persists with the rust tokenizers library (RobertaTokenizerFast). But I don't want to open up another issue without first making sure that this is an issue. <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7549/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7549/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7548
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7548/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7548/comments
https://api.github.com/repos/huggingface/transformers/issues/7548/events
https://github.com/huggingface/transformers/issues/7548
713,950,660
MDU6SXNzdWU3MTM5NTA2NjA=
7,548
Longformer2Roberta: global_attention_mask is never used
{ "login": "alexyalunin", "id": 23011284, "node_id": "MDQ6VXNlcjIzMDExMjg0", "avatar_url": "https://avatars.githubusercontent.com/u/23011284?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexyalunin", "html_url": "https://github.com/alexyalunin", "followers_url": "https://api.github.com/users/alexyalunin/followers", "following_url": "https://api.github.com/users/alexyalunin/following{/other_user}", "gists_url": "https://api.github.com/users/alexyalunin/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexyalunin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexyalunin/subscriptions", "organizations_url": "https://api.github.com/users/alexyalunin/orgs", "repos_url": "https://api.github.com/users/alexyalunin/repos", "events_url": "https://api.github.com/users/alexyalunin/events{/privacy}", "received_events_url": "https://api.github.com/users/alexyalunin/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "This should be solved soon by the new `generate()` design: https://github.com/huggingface/transformers/pull/6949", "Probably still takes ~1,2 weeks until merge", "@patrickvonplaten are you sure you mentioned the correct issue? The issue about the `generate()` function was this one #7489 \r\nIn the current issue I mention that `global_attention_mask` is never used during training. ", "You are 100% correct @alexyalunin :D - sorry my bad! Thanks for linking the correct issue!", "Regarding this issue, I will add more scripts showing how Longformer2Roberta can be trained. I'll pay special attention to your issue here then :-)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,609
1,609
NONE
null
I was following the Longformer2Roberta tutorial https://github.com/huggingface/transformers/blob/master/model_cards/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16/README.md, it seems like 'global_attention_mask' is never used because this column is removed after https://github.com/huggingface/transformers/blob/9bdce3a4f91c6d53873582b0210e61c92bba8fd3/src/transformers/trainer.py#L301 so you have to add this column to the signature (whatever it is) https://github.com/huggingface/transformers/blob/9bdce3a4f91c6d53873582b0210e61c92bba8fd3/src/transformers/trainer.py#L324 or set `remove_unused_columns=False` in TrainingArguments @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7548/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7548/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7547
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7547/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7547/comments
https://api.github.com/repos/huggingface/transformers/issues/7547/events
https://github.com/huggingface/transformers/issues/7547
713,929,722
MDU6SXNzdWU3MTM5Mjk3MjI=
7,547
Converting Tensorflow checkpoint to Pytorch not work for TF models downloaded using TFAutoModel.from_pretrained()
{ "login": "wangyems", "id": 52801275, "node_id": "MDQ6VXNlcjUyODAxMjc1", "avatar_url": "https://avatars.githubusercontent.com/u/52801275?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangyems", "html_url": "https://github.com/wangyems", "followers_url": "https://api.github.com/users/wangyems/followers", "following_url": "https://api.github.com/users/wangyems/following{/other_user}", "gists_url": "https://api.github.com/users/wangyems/gists{/gist_id}", "starred_url": "https://api.github.com/users/wangyems/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wangyems/subscriptions", "organizations_url": "https://api.github.com/users/wangyems/orgs", "repos_url": "https://api.github.com/users/wangyems/repos", "events_url": "https://api.github.com/users/wangyems/events{/privacy}", "received_events_url": "https://api.github.com/users/wangyems/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you provide an example code that didn't work?", "Providing the example for repo as below(e.g. let's try bert):\r\n`from transformers import AutoConfig, TFAutoModel, BertForPreTraining, load_tf_weights_in_bert`\r\n`model_name = \"bert-base-uncased\"`\r\n`config = AutoConfig.from_pretrained(model_name)`\r\n`tf_model = TFAutoModel.from_pretrained(model_name, config)#get the tf model using TFAutoModel`\r\n`tf_model.save_weights(\"./bert\")#save tf model to ckpt`\r\n`pt_model = BertForPreTraining(config)#init pt model`\r\n`load_tf_weights_in_bert(pt_model, config, \"./\")#convert tf ckpt to pt model`", "You should use `save_pretrained` and `from_pretrained` to do the conversion:\r\n\r\n```py\r\ntf_model.save_pretrained(\"./bert\")\r\npt_model = BertForPreTraining.from_pretrained(\"./bert\", from_tf=True)\r\n```", "Yes, the conversion above works for me. But it seems to me that load_tf_weights_in_bert() works only with limited Bert model if I want to convert tensorflow checkpoint to a pytorch model.", "The `load_tf_weights_in_bert` method is meant to be used to convert BERT models from the original implementation (google-research/bert), not to do the conversion between our architectures in PyTorch <> our architectures in TensorFlow.\r\n\r\nClosing as the conversion shown worked!", "Thanks! @LysandreJik " ]
1,601
1,602
1,602
CONTRIBUTOR
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I took a try of converting tf checkpoint to pytorch and it works well on the model that in the links on your [page](https://huggingface.co/transformers/converting_tensorflow_models.html) However, the conversion seems not working with models(bert, albert..) that downloaded using TFAutoModel.from_pretrained() I am wondering if I miss anything or those models are not currently supported? Thanks <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7547/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7546
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7546/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7546/comments
https://api.github.com/repos/huggingface/transformers/issues/7546/events
https://github.com/huggingface/transformers/issues/7546
713,869,908
MDU6SXNzdWU3MTM4Njk5MDg=
7,546
[s2s] label smoothing loss should be normalized
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" }, { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, I'm interested in taking a look at this. Could you please point out where to start?", "1) On a branch try to add some logic to `label_smoothed_nll_loss` https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py#L35 such that changing `train_batch_size` doesn't wildly change loss.\r\n2) validate (or ask for help validating) that the change does not hurt fine-tuning performance.\r\n\r\nThe existing code is copied from `fairseq`, so we need to be fairly sure that the change does no harm before we merge it.\r\n\r\ncc @patil-suraj\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,610
1,610
CONTRIBUTOR
null
by the number of padding tokens in a batch. Currently, if you change `--train_batch_size` or `--max_target_length`, your loss value will scale wildly, making it hard to compare runs.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7546/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7546/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7545
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7545/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7545/comments
https://api.github.com/repos/huggingface/transformers/issues/7545/events
https://github.com/huggingface/transformers/pull/7545
713,865,123
MDExOlB1bGxSZXF1ZXN0NDk3MTA0NTk3
7,545
[s2s] fix lockfile and peg distillation constants
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7545/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7545", "html_url": "https://github.com/huggingface/transformers/pull/7545", "diff_url": "https://github.com/huggingface/transformers/pull/7545.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7545.patch", "merged_at": 1601668694000 }
https://api.github.com/repos/huggingface/transformers/issues/7544
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7544/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7544/comments
https://api.github.com/repos/huggingface/transformers/issues/7544/events
https://github.com/huggingface/transformers/pull/7544
713,773,238
MDExOlB1bGxSZXF1ZXN0NDk3MDMxMTI5
7,544
Create Model Card For "abhilash1910/french-roberta" Model
{ "login": "abhilash1910", "id": 30946547, "node_id": "MDQ6VXNlcjMwOTQ2NTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/30946547?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhilash1910", "html_url": "https://github.com/abhilash1910", "followers_url": "https://api.github.com/users/abhilash1910/followers", "following_url": "https://api.github.com/users/abhilash1910/following{/other_user}", "gists_url": "https://api.github.com/users/abhilash1910/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhilash1910/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhilash1910/subscriptions", "organizations_url": "https://api.github.com/users/abhilash1910/orgs", "repos_url": "https://api.github.com/users/abhilash1910/repos", "events_url": "https://api.github.com/users/abhilash1910/events{/privacy}", "received_events_url": "https://api.github.com/users/abhilash1910/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Thanks for sharing!", "Thank you @julien-c !" ]
1,601
1,602
1,602
CONTRIBUTOR
null
# Model Card (Roberta MLM on French News Corpus) Model Card for [abhilash1910/french-roberta](https://huggingface.co/abhilash1910/french-roberta). Contains the model specification and important links which helped me create this . It uses the Roberta MLM on French New corpus (extracted from Leipzig).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7544/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7544", "html_url": "https://github.com/huggingface/transformers/pull/7544", "diff_url": "https://github.com/huggingface/transformers/pull/7544.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7544.patch", "merged_at": 1602102928000 }
https://api.github.com/repos/huggingface/transformers/issues/7543
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7543/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7543/comments
https://api.github.com/repos/huggingface/transformers/issues/7543/events
https://github.com/huggingface/transformers/issues/7543
713,747,283
MDU6SXNzdWU3MTM3NDcyODM=
7,543
Seq2SeqTrainer: missing features
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sshleifer \r\n\r\n2. Done #7532\r\n3. logging can be controlled using `--logging_steps`, default is 500\r\n4. I've also observed this, PL does something different I guess, final metrics should be same IMO\r\n" ]
1,601
1,602
1,602
CONTRIBUTOR
null
These could all be separate issues, if you want to tackle 1 feel free to make a new issue to link to your PR, or not! 1. Configure lr scheduler from the command line https://github.com/huggingface/transformers/blob/master/examples/lightning_base.py#L119 2. Configure dropout, layerdrop from the command line: https://github.com/huggingface/transformers/blob/master/examples/lightning_base.py#L92 3. Logging to wandb seems much less frequent than with the PL integration, which sends train loss to wandb every step. 4. Losses printed out are different than with PL, (seem to be normalized in some way). This merits investigation. cc @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7543/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7542
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7542/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7542/comments
https://api.github.com/repos/huggingface/transformers/issues/7542/events
https://github.com/huggingface/transformers/pull/7542
713,736,850
MDExOlB1bGxSZXF1ZXN0NDk3MDAyMDkw
7,542
Allow nested tensors in predicted logits
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
COLLABORATOR
null
# What does this PR do? Allow deep-nested list or tuple of tensors in the predicted logits of a model. Also excluded the past from those logits if we have a model using past states. <!-- Remove if not applicable --> Fixes #7539
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7542/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7542", "html_url": "https://github.com/huggingface/transformers/pull/7542", "diff_url": "https://github.com/huggingface/transformers/pull/7542.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7542.patch", "merged_at": 1601893995000 }
https://api.github.com/repos/huggingface/transformers/issues/7541
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7541/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7541/comments
https://api.github.com/repos/huggingface/transformers/issues/7541/events
https://github.com/huggingface/transformers/issues/7541
713,733,677
MDU6SXNzdWU3MTM3MzM2Nzc=
7,541
T5: forward and generate produce different results even for greedy decoding of a single token
{ "login": "Iwontbecreative", "id": 494951, "node_id": "MDQ6VXNlcjQ5NDk1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/494951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Iwontbecreative", "html_url": "https://github.com/Iwontbecreative", "followers_url": "https://api.github.com/users/Iwontbecreative/followers", "following_url": "https://api.github.com/users/Iwontbecreative/following{/other_user}", "gists_url": "https://api.github.com/users/Iwontbecreative/gists{/gist_id}", "starred_url": "https://api.github.com/users/Iwontbecreative/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Iwontbecreative/subscriptions", "organizations_url": "https://api.github.com/users/Iwontbecreative/orgs", "repos_url": "https://api.github.com/users/Iwontbecreative/repos", "events_url": "https://api.github.com/users/Iwontbecreative/events{/privacy}", "received_events_url": "https://api.github.com/users/Iwontbecreative/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 2197722692, "node_id": "MDU6TGFiZWwyMTk3NzIyNjky", "url": "https://api.github.com/repos/huggingface/transformers/labels/t5", "name": "t5", "color": "509fc4", "default": false, "description": "" } ]
closed
false
null
[]
[ "Part discrepancy here is that during `generate` we put `decoder_start_token_id` at the front of the output, tell the model to predict the next token, then append that next token to the end of the output. \r\nFor `forward`, at the first position, we tell the model to predict the next token conditional on the previous token in `decoder_input_ids`, which should be `pad_token=0`, but we don't have any append step since everything is done in parallel.\r\n\r\nIf you ignore the leading zero, have you found examples where `generate` and `forward` produce different outputs?\r\n", "Yes, though for outputs of length >= 1, where I guess it would be expected since forward is not autoregressive whereas generate is.\r\n\r\nPlaying around with a few sentences, it seems like behaviors/results are the same, iff:\r\n`model.generate(input_ids=input_ids, max_length=2)` (1 breaks because it counts padding) and discard the first padding. This sounds reasonable (albeit a bit undocumented) and I guess the leading 0 is ignored when calling decode so most users don't run into this.\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Closing given discussion above," ]
1,601
1,607
1,607
CONTRIBUTOR
null
Tagging @sshleifer following earlier discussions. ## Environment info - `transformers` version: 3.2.0 (also 3.0.2) - Platform: Linux-4.4.0-17763-Microsoft-x86_64-with-glibc2.10 (WSL, but also on normal Ubuntu) - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: Fails independently of this - Using distributed or parallel set-up in script?: Fails independently of this ## Information I am using the T5 model to train on a seq2seq task. The model.forward() and model.generate() can differ, even for greedy decoding of a single token, since model.generate() seems to add a padding token before generating. This matters for "classification-style" tasks where we usually decode a single token (positive/negative for sentiment for instance). The problem arises when using: * [X] my own modified scripts: (give details below) Very similar to example: this code is enough: ```python from transformers import T5Tokenizer, T5ForConditionalGeneration import torch tokenizer = T5Tokenizer.from_pretrained("t5-small") model = T5ForConditionalGeneration.from_pretrained("t5-small") def run_comparison(input_sent): input_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(input_sent)) input_ids = torch.tensor([input_ids]) target_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("chocolate </s>")) target_ids = torch.tensor([target_ids]) res_fwd = torch.argmax(model(input_ids=input_ids, labels=target_ids)[1], -1) res_gen = model.generate(input_ids=input_ids, max_length=2) print("Running comparison for %s" % input_sent) print("Using model.forward(): ", tokenizer.decode(res_fwd[0]), res_fwd[0]) print("Using model.generate(): ", tokenizer.decode(res_gen[0]), res_gen[0]) run_comparison("I love </s>") ``` Outputs: Running comparison for I love </s> Using model.forward(): and tensor([ 3, 11]) Using model.generate(): tensor([0, 3]) ## To reproduce Steps to reproduce the behavior: 1. Run code above. ## Expected behavior model.forward() and model.generate() give the same output for single token greedy decoding.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7541/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7541/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7540
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7540/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7540/comments
https://api.github.com/repos/huggingface/transformers/issues/7540/events
https://github.com/huggingface/transformers/issues/7540
713,691,270
MDU6SXNzdWU3MTM2OTEyNzA=
7,540
Difference between CLS hidden state and pooled_output
{ "login": "datistiquo", "id": 47474379, "node_id": "MDQ6VXNlcjQ3NDc0Mzc5", "avatar_url": "https://avatars.githubusercontent.com/u/47474379?v=4", "gravatar_id": "", "url": "https://api.github.com/users/datistiquo", "html_url": "https://github.com/datistiquo", "followers_url": "https://api.github.com/users/datistiquo/followers", "following_url": "https://api.github.com/users/datistiquo/following{/other_user}", "gists_url": "https://api.github.com/users/datistiquo/gists{/gist_id}", "starred_url": "https://api.github.com/users/datistiquo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/datistiquo/subscriptions", "organizations_url": "https://api.github.com/users/datistiquo/orgs", "repos_url": "https://api.github.com/users/datistiquo/repos", "events_url": "https://api.github.com/users/datistiquo/events{/privacy}", "received_events_url": "https://api.github.com/users/datistiquo/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Yes so BERT (the base model without any heads on top) outputs 2 things: `last_hidden_state `and `pooler_output`.\r\n\r\nFirst question:\r\n* `last_hidden_state` contains the hidden representations for each token in each sequence of the batch. So the size is `(batch_size, seq_len, hidden_size)`. \r\n* `pooler_output` contains a \"representation\" of each sequence in the batch, and is of size `(batch_size, hidden_size)`. What it basically does is take the hidden representation of the [CLS] token of each sequence in the batch (which is a vector of size `hidden_size`), and then run that through the [`BertPooler`](https://github.com/huggingface/transformers/blob/de4d7b004a24e4bb087eb46d742ea7939bc74644/src/transformers/modeling_bert.py#L498) nn.Module. This consists of a linear layer followed by a Tanh activation function. The weights of this linear layer are already pretrained on the next sentence prediction task (note that BERT is pretrained on 2 tasks: masked language modeling and next sentence prediction). I assume that the authors of the Transformers library have taken the weights from the original TF implementation, and initialized the layer with them. In theory, they would come from [`BertForPretraining`](https://github.com/huggingface/transformers/blob/de4d7b004a24e4bb087eb46d742ea7939bc74644/src/transformers/modeling_bert.py#L862) - which is BERT with the 2 pretraining heads on top. \r\n\r\nSecond question:\r\nYes you can fine-tune them, just like the hidden states, because the weights of the linear layer are updated when you perform a `loss.backward()`. \r\n\r\nBTW, please ask questions related to BERT/other models (which are not related to bugs) on the [forum](https://discuss.huggingface.co/), rather than posting them here.", "Thank you.\r\n\r\nAm I right that the TFBertForSequenceClassification just uses the pooled output of the main BERT model and puts it in a dropout and a dense layer with just 2 neurons?\r\n\r\nSince this model works very well for my use cases I try to extract the encodings of the bert model and just need to feed them in a simple dense layer to reduce prediction time...\r\n\r\nSo as I understand you, this pooling output stems from a classification head during pretraining? That is my confusiuon. Because I thought for such thing you use bert model for classficiation task with a head on top.\r\n\r\nSo if I would rebuil this sitautaion starting from just the bert model how would I intizialize my \"own\" BertPooler with the pretrained weights? So feedining pooled output to a dense layer with some pretrained weights like the TFBertForSequenceClassification model.\r\n\r\nWhy is actually the cls token used when it is not so good for tasks?\r\n\r\nI would like to use other poolings or taking the average? But I think this you can do with the output of the hidden sequences. Maybe I want to feed the averaged pooled hidden sequence to the BertPooler too?\r\n", "Yes, looking at the [source code](https://github.com/huggingface/transformers/blob/aba4e22944f0c985bebdcde51d47a565dd4f551d/src/transformers/modeling_tf_bert.py#L1080) of `TFBertForSequenceClassification`, they define a dropout layer, followed by a linear layer that outputs a vector of size `config.num_labels`. \r\n\r\nIn the [forward pass](https://github.com/huggingface/transformers/blob/aba4e22944f0c985bebdcde51d47a565dd4f551d/src/transformers/modeling_tf_bert.py#L1141), they use `outputs[1]`, meaning the output of the pooler layer (whose weights were pretrained on the next sentence classification task). This pooler layer takes the final hidden representation of the [CLS] token (this is a vector of size 768), then applies a linear layer and tanh to it, and then, we can apply the dropout layer and the linear layer which we defined in the `__init__` method. \r\n\r\nWhat is actually a bit confusing, is that in the docs they state the following about the pooler output: \"This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence.\" (Source: https://huggingface.co/transformers/model_doc/bert.html#bertmodel)\r\n\r\nSo it's actually better to use `outputs[0]`, which are the hidden representations of all tokens, and take the average. But what actually also works well in practice is just using the hidden representation of the [CLS] token. The reason this [CLS] token is introduced is because it can be used for classification tasks. You can see the hidden representation of the [CLS] token as a representation of the whole sequence (sentence). Since `outputs[0]` is of size (batch_size, seq_len, hidden_size), and we only want the vector of the [CLS] token, we can obtain it by typing `outputs[0][:, 0, :]`. You can then apply a dropout layer, followed by a linear layer on top of that to get 2 outputs (in case you are doing binary text classification).\r\n\r\nSo, in practice this is what works well:\r\n\r\n```\r\nclass TFBertForSequenceClassification(TFBertPreTrainedModel, TFSequenceClassificationLoss):\r\n def __init__(self, config, *inputs, **kwargs):\r\n super().__init__(config, *inputs, **kwargs)\r\n\r\n self.num_labels = config.num_labels\r\n self.bert = TFBertMainLayer(config, name=\"bert\")\r\n self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)\r\n self.classifier = tf.keras.layers.Dense(\r\n config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name=\"classifier\"\r\n )\r\n\r\n @add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format(\"batch_size, sequence_length\"))\r\n @add_code_sample_docstrings(\r\n tokenizer_class=_TOKENIZER_FOR_DOC,\r\n checkpoint=\"bert-base-cased\",\r\n output_type=TFSequenceClassifierOutput,\r\n config_class=_CONFIG_FOR_DOC,\r\n )\r\n def call(\r\n self,\r\n inputs=None,\r\n attention_mask=None,\r\n token_type_ids=None,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n output_attentions=None,\r\n output_hidden_states=None,\r\n return_dict=None,\r\n labels=None,\r\n training=False,\r\n ):\r\n r\"\"\"\r\n labels (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`):\r\n Labels for computing the sequence classification/regression loss.\r\n Indices should be in :obj:`[0, ..., config.num_labels - 1]`.\r\n If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),\r\n If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).\r\n \"\"\"\r\n return_dict = return_dict if return_dict is not None else self.bert.return_dict\r\n\r\n if isinstance(inputs, (tuple, list)):\r\n labels = inputs[9] if len(inputs) > 9 else labels\r\n if len(inputs) > 9:\r\n inputs = inputs[:9]\r\n elif isinstance(inputs, (dict, BatchEncoding)):\r\n labels = inputs.pop(\"labels\", labels)\r\n\r\n outputs = self.bert(\r\n inputs,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n training=training,\r\n )\r\n\r\n last_hidden_state = outputs[0]\r\n cls_representation = last_hidden_state[:,0,:]\r\n pooled_output = self.dropout(cls_representation, training=training)\r\n logits = self.classifier(pooled_output)\r\n loss = None if labels is None else self.compute_loss(labels, logits)\r\n\r\n if not return_dict:\r\n output = (logits,) + outputs[2:]\r\n return ((loss,) + output) if loss is not None else output\r\n\r\n return TFSequenceClassifierOutput(\r\n loss=loss,\r\n logits=logits,\r\n hidden_states=outputs.hidden_states,\r\n attentions=outputs.attentions,\r\n )\r\n```\r\n\r\n\r\n\r\n", "I find this pooled_output very confusing because it is coming from somewhere from the \"deep\" and it breaks somehow the symmetry of the transformers.\r\n\r\nThat is the point of my initial question. Your above code modification is using the cls embedding. So, you could actually hjust still use the pooled_output, because it is actually the same? Ah, I forgot that the pooled_output is using the cls embeding too, but is fed to a tanh-layer then, right?\r\n\r\n\r\nTo Averaging: You mean something like GlobalAveragePooling, right? Then you have to take care about masking, right? Because for a sequence you get up to max lenght different embeddings but the actual sequence is just half along. So for averaging it is a good idea to using masking, right?\r\n", "I assume that using cls or averaging is better than pooled_output. Neverthelless I would want to try out pooled_output. So I wonder how to get pooled_output from a (finetuned) TFBertForSequenceClassification model?", "I just think that it is important to use the masking of the BERT outputs for averaging?\r\n\r\nhttps://discuss.huggingface.co/t/bert-output-for-padding-tokens/1550/2", "Yes you're right, you should only take into account those tokens which are not padding tokens if you want to average them. I'm gonna take a look and reply later!\r\n\r\n\r\n", "Just for interest I opened an issue:\r\n\r\nhttps://github.com/huggingface/transformers/issues/8148", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,610
1,610
NONE
null
Hi, The first ouput of the TFBertModel is last_hidden_state. And I assume the CLS embdding is the first element of this object, so last_hidden_state [0]? But then you have also the pooled_output. In the docs it is written that this comes from a linear layer on top. 1. This comes originally from the pre training. Can I imagine that this model for pre training is something like the BertForSequenceClassification? 2. Can this pooled_output be fine_tune when you fine tuning the weights of the BERT Model? I assume that this will be fixed and just the hidden states are fine tunable?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7540/reactions", "total_count": 19, "+1": 19, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7540/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7539
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7539/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7539/comments
https://api.github.com/repos/huggingface/transformers/issues/7539/events
https://github.com/huggingface/transformers/issues/7539
713,676,580
MDU6SXNzdWU3MTM2NzY1ODA=
7,539
Trainer fails to correctly tackle XLNetForSequenceClassification outputs
{ "login": "StepinSilence", "id": 25417535, "node_id": "MDQ6VXNlcjI1NDE3NTM1", "avatar_url": "https://avatars.githubusercontent.com/u/25417535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StepinSilence", "html_url": "https://github.com/StepinSilence", "followers_url": "https://api.github.com/users/StepinSilence/followers", "following_url": "https://api.github.com/users/StepinSilence/following{/other_user}", "gists_url": "https://api.github.com/users/StepinSilence/gists{/gist_id}", "starred_url": "https://api.github.com/users/StepinSilence/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StepinSilence/subscriptions", "organizations_url": "https://api.github.com/users/StepinSilence/orgs", "repos_url": "https://api.github.com/users/StepinSilence/repos", "events_url": "https://api.github.com/users/StepinSilence/events{/privacy}", "received_events_url": "https://api.github.com/users/StepinSilence/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, thanks for flagging this issue! The PR mentioned above should fix it.", "FYI for anyone else having this issue, I had the same issue using `Trainer.evaluate()` on a `T5ForConditionalGeneration` model.\r\nThe latest `transformers` version `3.3.1` was [released](https://github.com/huggingface/transformers/compare/v3.3.1...master) a few days before the [fix PR](https://github.com/huggingface/transformers/pull/7542), so looking forward to the next version with the fix :)\r\nthank you!" ]
1,601
1,602
1,601
NONE
null
## Environment info - `transformers` version: 3.3.1 - Platform: Linux-4.15.0-117-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: Yes, with CUDA_VISIBLE_DEVICES=0 - Using distributed or parallel set-up in script?: No ### Who can help @sgugger, @TevenLeScao ## Information Model I am using (Bert, XLNet ...): XLNet-base-cased The problem arises when using: * the official example scripts: ```text-classification/run_glue.py``` The tasks I am working on is: * an official GLUE/SQUaD task: SST-2 It seems that XLNetForSequenceClassification has different result outputs compared with other models, which makes the trainer fail to correctly tackle them. ## To reproduce Steps to reproduce the behavior: 1. Install ```transformers``` from master and download SST-2 data using ```download_glue_data.py``` 2. Create the following script ```bash GLUE_DIR=~/glue CUDA_VISIBLE_DEVICES=0 TASK_NAME=SST-2 python3 ~/applications/transformers/examples/text-classification/run_glue.py \ --model_name_or_path ~/xlnet \ --task_name $TASK_NAME \ --do_eval \ --data_dir $GLUE_DIR/$TASK_NAME \ --max_seq_length 64 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir ~/result/$TASK_NAME/ ``` 3. Run this script to make predictions ## Expected behavior Trainer should return the correct evaluation results like other models. ## Observed behavior ```bash 10/02/2020 22:33:53 - INFO - filelock - Lock 140365777899232 acquired on /data/home/liusishun/glue/SST-2/cached_dev_XLNetTokenizer_64_sst-2.lock 10/02/2020 22:33:53 - INFO - filelock - Lock 140365777899232 released on /data/home/liusishun/glue/SST-2/cached_dev_XLNetTokenizer_64_sst-2.lock 10/02/2020 22:33:56 - INFO - __main__ - *** Evaluate *** Evaluation: 0%| | 0/109 [00:00<?, ?it/s] Traceback (most recent call last): File "/data/home/liusishun/applications/transformers/examples/text-classification/run_glue.py", line 247, in <module> main() File "/data/home/liusishun/applications/transformers/examples/text-classification/run_glue.py", line 197, in main eval_result = trainer.evaluate(eval_dataset=eval_dataset) File "/data/home/liusishun/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1296, in evaluate output = self.prediction_loop(eval_dataloader, description="Evaluation") File "/data/home/liusishun/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1376, in prediction_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only) File "/data/home/liusishun/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1473, in prediction_step logits = tuple(logit.detach() for logit in logits) File "/data/home/liusishun/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1473, in <genexpr> logits = tuple(logit.detach() for logit in logits) AttributeError: 'tuple' object has no attribute 'detach' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7539/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7539/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7538
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7538/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7538/comments
https://api.github.com/repos/huggingface/transformers/issues/7538/events
https://github.com/huggingface/transformers/issues/7538
713,675,024
MDU6SXNzdWU3MTM2NzUwMjQ=
7,538
T5 supervised denoising task
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "@patrickvonplaten Any news?", "Yeah this looks good to me: \r\n\r\ninput_ids: I love <extra_id_0> and Mario.\r\ndecoder_input_ids: `decoder_start_token_id`\r\noutput/label: luca <EOS>", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,608
1,608
NONE
null
# 🚀 Feature request Hi everyone! I'm experimenting with T5 and I would like to fine-tune a specific pre-trained model of mine for tackling the 'filling the mask' task. To be clear, I have the following: I love \<mask> and Mario. Where \<mask> can be a single token or span. At the moment I framed the problem in this way: - input: I love <extra_id_0> and Mario. - output/label: luca The task I want to tackle is different from the canonical unsupervised one where I was able to perform it correctly. Do you think that the discussed framing presented above is enough? From the result that I got, it doesn't seem so.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7538/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7538/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7537
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7537/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7537/comments
https://api.github.com/repos/huggingface/transformers/issues/7537/events
https://github.com/huggingface/transformers/pull/7537
713,621,434
MDExOlB1bGxSZXF1ZXN0NDk2OTA3MTI1
7,537
Allow soft dependencies in the namespace with ImportErrors at use
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> FAISS_IMPORT_ERROR here ;)\r\n\r\nGood catch ;-)", "(we should have this for TF/PyTorch as well)" ]
1,601
1,601
1,601
COLLABORATOR
null
# What does this PR do? This PR aims at making errors due to soft dependency (like datasets) easier to understand for users. It aims at making everything available in the namespace and raising an import error at `init` or `from_pretained` <!-- Remove if not applicable --> Fixes #7536
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7537/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7537", "html_url": "https://github.com/huggingface/transformers/pull/7537", "diff_url": "https://github.com/huggingface/transformers/pull/7537.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7537.patch", "merged_at": 1601903525000 }
https://api.github.com/repos/huggingface/transformers/issues/7536
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7536/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7536/comments
https://api.github.com/repos/huggingface/transformers/issues/7536/events
https://github.com/huggingface/transformers/issues/7536
713,564,385
MDU6SXNzdWU3MTM1NjQzODU=
7,536
RAG model card code not working in Colab
{ "login": "rbownes", "id": 58034524, "node_id": "MDQ6VXNlcjU4MDM0NTI0", "avatar_url": "https://avatars.githubusercontent.com/u/58034524?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rbownes", "html_url": "https://github.com/rbownes", "followers_url": "https://api.github.com/users/rbownes/followers", "following_url": "https://api.github.com/users/rbownes/following{/other_user}", "gists_url": "https://api.github.com/users/rbownes/gists{/gist_id}", "starred_url": "https://api.github.com/users/rbownes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rbownes/subscriptions", "organizations_url": "https://api.github.com/users/rbownes/orgs", "repos_url": "https://api.github.com/users/rbownes/repos", "events_url": "https://api.github.com/users/rbownes/events{/privacy}", "received_events_url": "https://api.github.com/users/rbownes/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "You need to install `datasets` too for this model:\r\n```\r\n! pip install datasets\r\n```\r\nI'll work on some cleaner error messages.", "Thanks for responding @sgugger !\r\n\r\nSadly that didn't fix it:\r\n\r\n!pip install transformers\r\n!pip install datasets\r\n```\r\n\r\nNameError Traceback (most recent call last)\r\n\r\n<ipython-input-2-fcc46db034ee> in <module>()\r\n 2 \r\n 3 tokenizer = RagTokenizer.from_pretrained(\"facebook/rag-token-nq\")\r\n----> 4 retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\n 5 model = RagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever)\r\n 6 \r\n\r\n2 frames\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/retrieval_rag.py in __init__(self, dataset_name, dataset_split, index_name, vector_size, index_path, use_dummy_dataset)\r\n 218 \r\n 219 logger.info(\"Loading passages from {}\".format(self.dataset_name))\r\n--> 220 self.dataset = load_dataset(\r\n 221 self.dataset_name, with_index=False, split=self.dataset_split, dummy=self.use_dummy_dataset\r\n 222 )\r\n\r\nNameError: name 'load_dataset' is not defined\r\n```\r\n\r\nStill resulted in the same error.", "Did you try restarting the colab? `datasets` requires that if I'm not mistaken.", "Problem persists after restarting. \r\n\r\nI tried it locally as well and I get the same error, but with a more verbose message.\r\n\r\n```\r\nNameError Traceback (most recent call last)\r\n<ipython-input-4-e0fde23b2cd7> in <module>\r\n 2\r\n 3 tokenizer = RagTokenizer.from_pretrained(\"facebook/rag-token-nq\")\r\n----> 4 retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\n 5 model = RagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever)\r\n 6\r\n\r\n~/Library/Caches/pypoetry/virtualenvs/bbc-transformer-vt1pdFaV-py3.8/lib/python3.8/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)\r\n 306 question_encoder_tokenizer = rag_tokenizer.question_encoder\r\n 307 generator_tokenizer = rag_tokenizer.generator\r\n--> 308 return cls(\r\n 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer\r\n 310 )\r\n\r\n~/Library/Caches/pypoetry/virtualenvs/bbc-transformer-vt1pdFaV-py3.8/lib/python3.8/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)\r\n 281 )\r\n 282 if config.index_name == \"legacy\"\r\n--> 283 else HFIndex(\r\n 284 config.dataset,\r\n 285 config.dataset_split,\r\n\r\n~/Library/Caches/pypoetry/virtualenvs/bbc-transformer-vt1pdFaV-py3.8/lib/python3.8/site-packages/transformers/retrieval_rag.py in __init__(self, dataset_name, dataset_split, index_name, vector_size, index_path, use_dummy_dataset)\r\n 218\r\n 219 logger.info(\"Loading passages from {}\".format(self.dataset_name))\r\n--> 220 self.dataset = load_dataset(\r\n 221 self.dataset_name, with_index=False, split=self.dataset_split, dummy=self.use_dummy_dataset\r\n 222 )\r\n\r\nNameError: name 'load_dataset' is not defined\r\n```", "I think I found why while fixing the error message. This also needs the faiss library: `! pip install faiss`.", "@sgugger Confirmed.\r\n\r\n```\r\n!pip install transformers\r\n!pip install datasets\r\n!pip install faiss\r\n```\r\n\r\ngive the expected behaviour. Thank you!", "Working on having some clear error message for the next users in #7537 :-)\r\nThanks for flagging the problem!", "Thanks for the help! :) ", "@sgugger, I have the following versions of the packages installed : \r\ntransformers==3.3.1\r\ndatasets==1.1.2\r\nfaiss==1.5.3\r\nI still see the error.\r\nIt would be great if you could document which versions of faiss, datasets, and transformers works !", "I imported datasets to see if it helps. Didn't.", "on Google Colab, switch to a GPU runtime, then try with:\r\n`!pip install faiss-gpu`\r\nfinally restart the runtime.\r\n\r\nIt worked for me :)", "I have the same question, which is \r\n`ImportError Traceback (most recent call last)\r\n[<ipython-input-7-d8ba1013a0e5>](https://localhost:8080/#) in <module>()\r\n 6 \r\n 7 tokenizer = RagTokenizer.from_pretrained(\"facebook/rag-token-nq\")\r\n----> 8 retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\n 9 model = RagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever)\r\n 10 \r\n\r\n1 frames\r\n[/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py](https://localhost:8080/#) in requires_backends(obj, backends)\r\n 846 failed = [msg.format(name) for available, msg in checks if not available()]\r\n 847 if failed:\r\n--> 848 raise ImportError(\"\".join(failed))\r\n 849 \r\n 850 \r\n\r\nImportError: \r\nRagRetriever requires the 🤗 Datasets library but it was not found in your environment. You can install it with:\r\n```\r\npip install datasets\r\n```\r\nIn a notebook or a colab, you can install it by executing a cell with\r\n```\r\n!pip install datasets\r\n```\r\nthen restarting your kernel.\r\n\r\nNote that if you have a local folder named `datasets` or a local python file named `datasets.py` in your current\r\nworking directory, python may try to import this instead of the 🤗 Datasets library. You should rename this folder or\r\nthat python file if that's the case.\r\n\r\nRagRetriever requires the faiss library but it was not found in your environment. Checkout the instructions on the\r\ninstallation page of its repo: https://github.com/facebookresearch/faiss/blob/master/INSTALL.md and follow the ones\r\nthat match your environment.\r\n\r\n\r\n---------------------------------------------------------------------------\r\nNOTE: If your import is failing due to a missing package, you can\r\nmanually install dependencies using either !pip or !apt.\r\n\r\nTo view examples of installing some common dependencies, click the\r\n\"Open Examples\" button below.\r\n---------------------------------------------------------------------------`\r\nbut either I add !pip install faiss-gpu or !pip install faiss is not useful.", "> @sgugger Confirmed.\r\n> \r\n> ```\r\n> !pip install transformers\r\n> !pip install datasets\r\n> !pip install faiss\r\n> ```\r\n> \r\n> give the expected behaviour. Thank you!\r\n\r\nI used above commands and it shows faiss is imported but still gives the import error for faiss.\r\n\r\nImportError Traceback (most recent call last)\r\n[<ipython-input-5-c04f039ae844>](https://localhost:8080/#) in <cell line: 4>()\r\n 2 \r\n 3 tokenizer = RagTokenizer.from_pretrained(\"facebook/rag-token-nq\")\r\n----> 4 retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\n 5 model = RagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever)\r\n 6 \r\n\r\n1 frames\r\n[/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in requires_backends(obj, backends)\r\n 1012 failed = [msg.format(name) for available, msg in checks if not available()]\r\n 1013 if failed:\r\n-> 1014 raise ImportError(\"\".join(failed))\r\n 1015 \r\n 1016 \r\n\r\nImportError: \r\nRagRetriever requires the faiss library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/facebookresearch/faiss/blob/master/INSTALL.md and follow the ones\r\nthat match your environment. Please note that you may need to restart your runtime after installation.\r\n<img width=\"1002\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/75541422/2d1d974e-e292-4b5d-acbc-ec2ae644ef3f\">\r\n", "> > @sgugger Confirmed.\r\n> > ```\r\n> > !pip install transformers\r\n> > !pip install datasets\r\n> > !pip install faiss\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > give the expected behaviour. Thank you!\r\n> \r\n> I used above commands and it shows faiss is imported but still gives the import error for faiss.\r\n> \r\n> ImportError Traceback (most recent call last) in <cell line: 4>() 2 3 tokenizer = RagTokenizer.from_pretrained(\"facebook/rag-token-nq\") ----> 4 retriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True) 5 model = RagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever) 6\r\n> \r\n> 1 frames [/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in requires_backends(obj, backends) 1012 failed = [msg.format(name) for available, msg in checks if not available()] 1013 if failed: -> 1014 raise ImportError(\"\".join(failed)) 1015 1016\r\n> \r\n> ImportError: RagRetriever requires the faiss library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/facebookresearch/faiss/blob/master/INSTALL.md and follow the ones that match your environment. Please note that you may need to restart your runtime after installation. <img alt=\"image\" width=\"1002\" src=\"https://user-images.githubusercontent.com/75541422/248795710-2d1d974e-e292-4b5d-acbc-ec2ae644ef3f.png\">\r\n\r\nI am facing the same error. I tried the same solution, but it did not work", "Install both `faiss-cpu` and `faiss-gpu` while using GPU runtime in Colab solves the issue.", "> Install both `faiss-cpu` and `faiss-gpu` while using GPU runtime in Colab solves the issue.\r\n\r\nUnluckily it still doesn't work.\r\n![image](https://github.com/huggingface/transformers/assets/46794180/a1a661e5-5874-4881-86c0-d8ab20350470)\r\n\r\nError message is still the same:\r\n![image](https://github.com/huggingface/transformers/assets/46794180/335a170a-6438-42ba-be95-4be78cf3c8f8)\r\n\r\n", "> > Install both `faiss-cpu` and `faiss-gpu` while using GPU runtime in Colab solves the issue.\r\n> \r\n> Unluckily it still doesn't work. ![image](https://private-user-images.githubusercontent.com/46794180/292166379-a1a661e5-5874-4881-86c0-d8ab20350470.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTEiLCJleHAiOjE3MDMxNTQ0MDEsIm5iZiI6MTcwMzE1NDEwMSwicGF0aCI6Ii80Njc5NDE4MC8yOTIxNjYzNzktYTFhNjYxZTUtNTg3NC00ODgxLTg2YzAtZDhhYjIwMzUwNDcwLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFJV05KWUFYNENTVkVINTNBJTJGMjAyMzEyMjElMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjMxMjIxVDEwMjE0MVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWNlYjFjNGZmMTFkZDMzOWY0NjJjZTQ3ZjhiNjIzMzhlOWQ4ZDJkNGQ3YjFmY2U1NmYxMzcxMjUzYWViMjQ4ZTYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.Ra-z56bokOuZndHRNhs-UMjpC0s2Ka12ZrBnFqOrIMw)\r\n> \r\n> Error message is still the same: ![image](https://private-user-images.githubusercontent.com/46794180/292166693-335a170a-6438-42ba-be95-4be78cf3c8f8.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTEiLCJleHAiOjE3MDMxNTQ0MDEsIm5iZiI6MTcwMzE1NDEwMSwicGF0aCI6Ii80Njc5NDE4MC8yOTIxNjY2OTMtMzM1YTE3MGEtNjQzOC00MmJhLWJlOTUtNGJlNzhjZjNjOGY4LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFJV05KWUFYNENTVkVINTNBJTJGMjAyMzEyMjElMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjMxMjIxVDEwMjE0MVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWRhNDk2ZDY5YWE0ZTZjM2I0NTExM2UzYTA4MzU0MjliYTZhY2E5YzJmNWQwNDYwNzhiMTgwMjIyNmUzNzMyMTYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.N7P_pNHiFITjD6pi2ZO6xJ2S8bHccXBHsGzgfyLcfqw)\r\n\r\nMine works fine so no idea what happened on yours lol. I'll leave my Colab notebook here: https://colab.research.google.com/drive/1xnTBsOZxG5hJJppz5ozbpdGxGx7MnDXw?usp=sharing", "Mine also works after restarting runtime. There was probably something wrong with dependencies", "I struggled with this for a bit. I think the key is that you need to !pip install ALL of your dependencies and then restart the session. Or it may just be that after 20 attempts something else magically worked itself out. Good luck!\r\n\r\n!pip install datasets evaluate transformers[sentencepiece]\r\n!pip install faiss-cpu\r\n!pip install faiss-gpu\r\n# need to do a Runtime -> Restart Session after this and the previous cell - ensure you are using a GPU" ]
1,601
1,705
1,601
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.3.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @julien-c @VictorSanh ## Information Model I am using RAG The problem arises when using: * [X] the official example scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Open a new colab notebook 2. !pip install transformers 3. execute the RAG model example code ```from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", return_tensors="pt") generated = model.generate(input_ids=input_dict["input_ids"]) print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0]) ``` error traceback ``` NameError Traceback (most recent call last) <ipython-input-5-fcc46db034ee> in <module>() 2 3 tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") ----> 4 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) 5 model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) 6 2 frames /usr/local/lib/python3.6/dist-packages/transformers/retrieval_rag.py in __init__(self, dataset_name, dataset_split, index_name, vector_size, index_path, use_dummy_dataset) 218 219 logger.info("Loading passages from {}".format(self.dataset_name)) --> 220 self.dataset = load_dataset( 221 self.dataset_name, with_index=False, split=self.dataset_split, dummy=self.use_dummy_dataset 222 ) NameError: name 'load_dataset' is not defined ``` ## Expected behavior I would expect the model card example to be output: # should give michael phelps => sounds reasonable or something to this effect.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7536/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7536/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7535
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7535/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7535/comments
https://api.github.com/repos/huggingface/transformers/issues/7535/events
https://github.com/huggingface/transformers/issues/7535
713,522,786
MDU6SXNzdWU3MTM1MjI3ODY=
7,535
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' when running run_tf_text_classification.py
{ "login": "pvcastro", "id": 12713359, "node_id": "MDQ6VXNlcjEyNzEzMzU5", "avatar_url": "https://avatars.githubusercontent.com/u/12713359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pvcastro", "html_url": "https://github.com/pvcastro", "followers_url": "https://api.github.com/users/pvcastro/followers", "following_url": "https://api.github.com/users/pvcastro/following{/other_user}", "gists_url": "https://api.github.com/users/pvcastro/gists{/gist_id}", "starred_url": "https://api.github.com/users/pvcastro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvcastro/subscriptions", "organizations_url": "https://api.github.com/users/pvcastro/orgs", "repos_url": "https://api.github.com/users/pvcastro/repos", "events_url": "https://api.github.com/users/pvcastro/events{/privacy}", "received_events_url": "https://api.github.com/users/pvcastro/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello!\r\n\r\nI think your dataset might have somewhere a row that is malformed. You should check this.", "Hi @jplu !\r\nI'll check mine, but I'm able to train with it using this tutorial here, adapted to mine: [https://huggingface.co/transformers/custom_datasets.html](https://huggingface.co/transformers/custom_datasets.html)\r\n\r\nAnd @Santosh-Gupta uses ChemProt and had the same problem, and ChemProt is an oficial biomedical benchmark, and he downloaded the training data directly from allenai repository.\r\n\r\nAnyway, I'll try running this same piece of code from the tensorflow script using a glue benchmark.", "Confirmed here...pointed to SST-2 train.csv and dev.csv and the same issue happened.\r\n[dev.txt](https://github.com/huggingface/transformers/files/5318339/dev.txt)\r\n[train.txt](https://github.com/huggingface/transformers/files/5318340/train.txt)\r\nRenamed them to .txt in order to upload here, but ran with the original names.\r\n", "Can you load your dataset with:\r\n\r\n```\r\nimport datasets\r\n\r\nfiles = {datasets.Split.TRAIN: \"train.csv\"}\r\nfiles[datasets.Split.VALIDATION] = \"dev.csv\"\r\nfiles[datasets.Split.TEST] = \"test.csv\"\r\n\r\ndatasets.load_dataset(\"csv\", data_file=files)\r\n```", "![image](https://user-images.githubusercontent.com/12713359/94936911-49d8ec00-04a5-11eb-821e-a99614e02dda.png)\r\nNo @jplu , that's exactly where I'm getting the error.", "Then it is an issue with the datasets package, can you post your issue there please https://github.com/huggingface/datasets", "Done, thanks!" ]
1,601
1,601
1,601
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (installed from master) - Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @jplu ## Information Model I am using (Bert, XLNet ...): Bert (Portuguese version: neuralmind/bert-base-portuguese-cased) The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give name) * [x] my own task or dataset: (give details below) I'm testing my own text classification dataset, and for this I was trying to use this new script `run_tf_text_classification.py` script from transformers' examples. The dataset is split into train / dev / test, and in csv format, containing just a text and a label columns, using comma as sep. Here's a sample: ``` text,label "Registra-se a presença do acadêmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausência injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessão dos benefícios da Justiça Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiência encerrada às 8h42min . <REL_SEP> <name> <REL_SEP> Juíza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretário de Audiência .",NO_RELATION ``` However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section. ## To reproduce Steps to reproduce the behavior: 1. Created a new conda environment using conda env -n transformers python=3.7 2. Cloned transformers master, `cd` into it and installed using pip install --editable . -r examples/requirements.txt 3. Installed tensorflow with `pip install tensorflow` 3. Ran `run_tf_text_classification.py` with the following parameters: ``` --train_file <DATASET_PATH>/train.csv \ --dev_file <DATASET_PATH>/dev.csv \ --test_file <DATASET_PATH>/test.csv \ --label_column_id 1 \ --model_name_or_path neuralmind/bert-base-portuguese-cased \ --output_dir <OUTPUT_PATH> \ --num_train_epochs 4 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --do_train \ --do_eval \ --do_predict \ --logging_steps 1000 \ --evaluate_during_training \ --save_steps 1000 \ --overwrite_output_dir \ --overwrite_cache ``` I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Here is the stack trace: ``` 2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 /media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options) FutureWarning, 2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1 2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz 2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0 2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N 2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) 2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1 10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False 10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False) 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock Using custom data configuration default Traceback (most recent call last): File "run_tf_text_classification.py", line 283, in <module> main() File "run_tf_text_classification.py", line 222, in main max_seq_length=data_args.max_seq_length, File "run_tf_text_classification.py", line 43, in get_tfds ds = datasets.load_dataset("csv", data_files=files) File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__ **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config for key in sorted(data_files.keys()): TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' ``` ## Expected behavior Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7535/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7534
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7534/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7534/comments
https://api.github.com/repos/huggingface/transformers/issues/7534/events
https://github.com/huggingface/transformers/issues/7534
713,509,099
MDU6SXNzdWU3MTM1MDkwOTk=
7,534
The links to examples on the website don't work
{ "login": "ekdnam", "id": 40426312, "node_id": "MDQ6VXNlcjQwNDI2MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/40426312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekdnam", "html_url": "https://github.com/ekdnam", "followers_url": "https://api.github.com/users/ekdnam/followers", "following_url": "https://api.github.com/users/ekdnam/following{/other_user}", "gists_url": "https://api.github.com/users/ekdnam/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekdnam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekdnam/subscriptions", "organizations_url": "https://api.github.com/users/ekdnam/orgs", "repos_url": "https://api.github.com/users/ekdnam/repos", "events_url": "https://api.github.com/users/ekdnam/events{/privacy}", "received_events_url": "https://api.github.com/users/ekdnam/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello! Indeed, you're right. This is because you're looking at an older version of the docs, and the files have since moved around. When clicking on a link, you should replace `master` with `v2.2.0` to get the correct link. For example:\r\n\r\n```\r\nhttps://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py\r\n```\r\n\r\nshould become:\r\n\r\n```\r\nhttps://github.com/huggingface/transformers/blob/v2.2.0/examples/run_lm_finetuning.py\r\n```\r\n\r\nSorry for the inconvenience.\r\n\r\nI don't think there's much we can do about older versions, but we could freeze the current and future scripts to tag versions cc @sgugger.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,601
1,607
1,607
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> I am looking at the examples part of the docs: https://huggingface.co/transformers/v2.2.0/examples.html The references to all the scripts on the web page don't work. for ex: [link to script](https://github.com/huggingface/transformers/blob/master/examples/run_ner.py) referenced [here](https://huggingface.co/transformers/v2.2.0/examples.html#named-entity-recognition) Thanks in advance! Here, - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7534/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7534/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7533
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7533/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7533/comments
https://api.github.com/repos/huggingface/transformers/issues/7533/events
https://github.com/huggingface/transformers/pull/7533
713,505,271
MDExOlB1bGxSZXF1ZXN0NDk2ODExNzk3
7,533
Add early stopping to trainer_tf.py
{ "login": "KMFODA", "id": 35491698, "node_id": "MDQ6VXNlcjM1NDkxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KMFODA", "html_url": "https://github.com/KMFODA", "followers_url": "https://api.github.com/users/KMFODA/followers", "following_url": "https://api.github.com/users/KMFODA/following{/other_user}", "gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}", "starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions", "organizations_url": "https://api.github.com/users/KMFODA/orgs", "repos_url": "https://api.github.com/users/KMFODA/repos", "events_url": "https://api.github.com/users/KMFODA/events{/privacy}", "received_events_url": "https://api.github.com/users/KMFODA/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks @KMFODA!\r\n\r\nI'm not really in favor to do this manually as there are already a Keras callbacks taking care of this https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping that can monitor multiple values on which to base his early stop.\r\n\r\n@KMFODA if you want to add the feature to add Keras callbacks handling, it will be more than welcome :) As it is part of the features we plan to add in a near future.", "Seeing that I have close to no experience with TF, I won't be able to review this.", "Not a problem. @jplu I agree I prefer using Keras callbacks I just only have experience using it with Keras’s model.fit function. I’ll think and experiment with how to use it in a custom built TF model. Hopefully if successful, it should be fairly simple then to add early stopping based on custom metrics rather than just validation loss.", "I've just pushed the latest changes to trainer_tf.py that will use Keras's callbacks for early stopping rather than the manual solution I had initially submitted. Setting the callback in the `TFTrainer` function using a command such as this:\r\n\r\n`callbacks = [EarlyStopping(monitor='loss', patience=1, verbose=1)]\r\n`\r\n\r\nwill monitor the training loss and stop the model at the first epoch which fails to improve on the best training loss metric. Using the monitor variable we can also select metrics, other than the training loss, to carry out the early stopping on.", "Hi all, anything more I can do to help get this merged?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Bump @LysandreJik @jplu ", "We're moving away from the TFTrainer to fully integrate with Keras, so we won't add new functionality to the TFTrainer.", "> We're moving away from the TFTrainer to fully integrate with Keras, so we won't add new functionality to the TFTrainer.\r\n\r\nAlright, sounds good, This can be closed then?", "I'll let the original author close it :-)" ]
1,601
1,618
1,618
CONTRIBUTOR
null
## Summary This PR adds the early stopping feature to trainer_tf.py. ## Related Issues Alongside #4186 should close #4894. ## Who can review? @sgugger & @BramVanroy.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7533/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7533/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7533", "html_url": "https://github.com/huggingface/transformers/pull/7533", "diff_url": "https://github.com/huggingface/transformers/pull/7533.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7533.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7532
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7532/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7532/comments
https://api.github.com/repos/huggingface/transformers/issues/7532/events
https://github.com/huggingface/transformers/pull/7532
713,446,988
MDExOlB1bGxSZXF1ZXN0NDk2NzY0NDM2
7,532
[s2s] add config params like Dropout in Seq2SeqTrainingArguments
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, will add lr scheduler in separate PR ", "@sshleifer anything missing in this PR ?" ]
1,601
1,601
1,601
MEMBER
null
# What does this PR do? 1. Adds `config` params (`encoder_layerdrop`, `decoder_layerdrop`, `dropout`, `attention_dropout`) in `Seq2SeqTrainingArguments` 2. Fix T5 warnings (don't pass src_lang, tgt_lang args to `T5Tokenizer`) 3. Correct `vocab_size` for `FSMT`. 4. Fix `test_finetune_trainer_slow` 5. minor code cleanup in `Seq2SeqTrainer` @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7532/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7532/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7532", "html_url": "https://github.com/huggingface/transformers/pull/7532", "diff_url": "https://github.com/huggingface/transformers/pull/7532.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7532.patch", "merged_at": 1601829751000 }
https://api.github.com/repos/huggingface/transformers/issues/7531
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7531/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7531/comments
https://api.github.com/repos/huggingface/transformers/issues/7531/events
https://github.com/huggingface/transformers/issues/7531
713,427,970
MDU6SXNzdWU3MTM0Mjc5NzA=
7,531
Cammembert fine tuning from checkpoint
{ "login": "UrszulaCzerwinska", "id": 3660462, "node_id": "MDQ6VXNlcjM2NjA0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/3660462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/UrszulaCzerwinska", "html_url": "https://github.com/UrszulaCzerwinska", "followers_url": "https://api.github.com/users/UrszulaCzerwinska/followers", "following_url": "https://api.github.com/users/UrszulaCzerwinska/following{/other_user}", "gists_url": "https://api.github.com/users/UrszulaCzerwinska/gists{/gist_id}", "starred_url": "https://api.github.com/users/UrszulaCzerwinska/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/UrszulaCzerwinska/subscriptions", "organizations_url": "https://api.github.com/users/UrszulaCzerwinska/orgs", "repos_url": "https://api.github.com/users/UrszulaCzerwinska/repos", "events_url": "https://api.github.com/users/UrszulaCzerwinska/events{/privacy}", "received_events_url": "https://api.github.com/users/UrszulaCzerwinska/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hello, I still didn't solve the problem... anyone?", "It seems the tokenizer you've provided cannot be loaded. You've provided `--tokenizer_name=\"./sentencepiece.bpe.model\"` which is a path to a file, and cannot work with an AutoTokenizer.\r\n\r\nI recommend you put the tokenizer file in the same folder as your model, so that it can know the model type, and therefore the tokenizer type, from the configuration.\r\n\r\nAlso, there was an issue a few months back where the tokenizer wouldn't be saved by the script, and you would have to specify it like you just did. We patched this issue since, so I invite you to use the [`run_mlm.py` script](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) instead, alongside upgrading your `transformers` version to the latest one. Thank you for your understanding.", "Ok, thank you I will check it out" ]
1,601
1,616
1,616
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-28-generic-x86_64-with-debian-buster-sid - Python version: 3.7.3 - PyTorch version (GPU?): 1.3.1 (True) - Tensorflow version (GPU?): 2.0.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: no ### Who can help albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger ## Information Model I am using camembert: The problem arises when using: * [X] the official example scripts: run_language_modeling.py The tasks I am working on is: * [X ] an official GLUE/SQUaD task: LM fine tuning * [X ] my own task or dataset: working with my own data corpus ## To reproduce Steps to reproduce the behavior: 1. run the script run_language_modeling.py was first run "from scratch" starting with cammembert model with my own data. 2. launching the same script but starting from checkpoint and the same data to continue training ```bash python run_language_modeling.py --output_dir=output2 --model_name_or_path="./LM/fine_tune_cammembert/output/checkpoint-27500" --tokenizer_name="./sentencepiece.bpe.model" --do_train --train_data_file=corpus_camambert/train_1.txt --do_eval --eval_data_file=corpus_camambert/test_valid.txt --mlm --line_by_line --evaluate_during_training --overwrite_output_dir ``` ## What happens ```bash Traceback (most recent call last): File "run_language_modeling.py", line 281, in <module> main() File "run_language_modeling.py", line 186, in main tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir) File "/home/ccass/anaconda3/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 209, in from_pretrained config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) File "/home/ccass/anaconda3/lib/python3.7/site-packages/transformers/configuration_auto.py", line 272, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/ccass/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 355, in get_config_dict config_dict = cls._dict_from_json_file(resolved_config_file) File "/home/ccass/anaconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 437, in _dict_from_json_file text = reader.read() File "/home/ccass/anaconda3/lib/python3.7/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfe in position 51: invalid start byte ``` ## Expected behavior Continue Taining
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7531/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7530
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7530/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7530/comments
https://api.github.com/repos/huggingface/transformers/issues/7530/events
https://github.com/huggingface/transformers/issues/7530
713,398,556
MDU6SXNzdWU3MTMzOTg1NTY=
7,530
ELECTRA - some weights are not loaded
{ "login": "mdocekal", "id": 14943272, "node_id": "MDQ6VXNlcjE0OTQzMjcy", "avatar_url": "https://avatars.githubusercontent.com/u/14943272?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mdocekal", "html_url": "https://github.com/mdocekal", "followers_url": "https://api.github.com/users/mdocekal/followers", "following_url": "https://api.github.com/users/mdocekal/following{/other_user}", "gists_url": "https://api.github.com/users/mdocekal/gists{/gist_id}", "starred_url": "https://api.github.com/users/mdocekal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mdocekal/subscriptions", "organizations_url": "https://api.github.com/users/mdocekal/orgs", "repos_url": "https://api.github.com/users/mdocekal/repos", "events_url": "https://api.github.com/users/mdocekal/events{/privacy}", "received_events_url": "https://api.github.com/users/mdocekal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, indeed, this can scare users even if there is no actual problem. #7569 will fix this." ]
1,601
1,601
1,601
NONE
null
## Environment info - `transformers` version: 3.3.1 - Platform: Linux-4.15.0-118-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help albert, bert, GPT2, XLM: @LysandreJik Model Cards: @julien-c ## Information Model I am using: ELECTRA I am getting a warning: > Some weights of the model checkpoint at google/electra-large-discriminator were not used when initializing ElectraModel: ['electra.embeddings_project.weight', 'electra.embeddings_project.bias'] > - This IS expected if you are initializing ElectraModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). > - This IS NOT expected if you are initializing ElectraModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). when using the AutoModel.from_pretrained for google/electra-base-discriminator or google/electra-large-discriminator. There is no warning for google/electra-small-discriminator. The problem remains the same when directly using the ElectraModel.from_pretrained method. ## To reproduce ``` import transformers m=transformers.AutoModel.from_pretrained("google/electra-large-discriminator") # or m=transformers.AutoModel.from_pretrained("google/electra-base-discriminator") ``` ## Expected behavior no warning
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7530/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7530/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7529
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7529/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7529/comments
https://api.github.com/repos/huggingface/transformers/issues/7529/events
https://github.com/huggingface/transformers/issues/7529
713,238,914
MDU6SXNzdWU3MTMyMzg5MTQ=
7,529
[GPT-2] How many columns in LM model wte layer are positional embeddings?
{ "login": "changyeli", "id": 9058204, "node_id": "MDQ6VXNlcjkwNTgyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9058204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/changyeli", "html_url": "https://github.com/changyeli", "followers_url": "https://api.github.com/users/changyeli/followers", "following_url": "https://api.github.com/users/changyeli/following{/other_user}", "gists_url": "https://api.github.com/users/changyeli/gists{/gist_id}", "starred_url": "https://api.github.com/users/changyeli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/changyeli/subscriptions", "organizations_url": "https://api.github.com/users/changyeli/orgs", "repos_url": "https://api.github.com/users/changyeli/repos", "events_url": "https://api.github.com/users/changyeli/events{/privacy}", "received_events_url": "https://api.github.com/users/changyeli/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I would say that none are. The positions are managed by the position embedding: `transformer.wpe.weight`." ]
1,601
1,601
1,601
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> Hello everyone, I have a quick question about the token weight matrix from the GPT-2 model. The transformers documentation for GPT-2 indicates that > GPT-2 is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left. How many columns in the ```transformer.wte.weight``` are linked to the positional embeddings? For the GPT-2 small model, the size of embedding matrix is (50257, 768), in those 768 columns, how many of them are linked to the positional embeddings? Many thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7529/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7529/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7528
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7528/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7528/comments
https://api.github.com/repos/huggingface/transformers/issues/7528/events
https://github.com/huggingface/transformers/issues/7528
713,215,895
MDU6SXNzdWU3MTMyMTU4OTU=
7,528
QA pipeline fails with long context.
{ "login": "brian8128", "id": 10691563, "node_id": "MDQ6VXNlcjEwNjkxNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/10691563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brian8128", "html_url": "https://github.com/brian8128", "followers_url": "https://api.github.com/users/brian8128/followers", "following_url": "https://api.github.com/users/brian8128/following{/other_user}", "gists_url": "https://api.github.com/users/brian8128/gists{/gist_id}", "starred_url": "https://api.github.com/users/brian8128/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brian8128/subscriptions", "organizations_url": "https://api.github.com/users/brian8128/orgs", "repos_url": "https://api.github.com/users/brian8128/repos", "events_url": "https://api.github.com/users/brian8128/events{/privacy}", "received_events_url": "https://api.github.com/users/brian8128/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This was fixed in a more recent version." ]
1,601
1,601
1,601
NONE
null
## Environment info - `transformers` version: 3.0.2n your GitHub issue and FILL OUT the two last points. - Platform: Linux-4.15.0-117-generic-x86_64-with-glibc2.10 - Python version: 3.8.2 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Same result either way - Using distributed or parallel set-up in script?: no ### Who can help @sgugger ## Information Model I am using: DistilBert via the QA pipeline. The tasks I am working on is: * my own task or dataset: ## To reproduce ``` from transformers import pipeline nlp = pipeline("question-answering") context = """ Once upon a midnight dreary, while I pondered, weak and weary, Over many a quaint and curious volume of forgotten lore— While I nodded, nearly napping, suddenly there came a tapping, As of some one gently rapping, rapping at my chamber door. “’Tis some visitor,” I muttered, “tapping at my chamber door— Only this and nothing more.” Ah, distinctly I remember it was in the bleak December; And each separate dying ember wrought its ghost upon the floor. Eagerly I wished the morrow;—vainly I had sought to borrow From my books surcease of sorrow—sorrow for the lost Lenore— For the rare and radiant maiden whom the angels name Lenore— Nameless here for evermore. And the silken, sad, uncertain rustling of each purple curtain Thrilled me—filled me with fantastic terrors never felt before; So that now, to still the beating of my heart, I stood repeating “’Tis some visitor entreating entrance at my chamber door— Some late visitor entreating entrance at my chamber door;— This it is and nothing more.” Presently my soul grew stronger; hesitating then no longer, “Sir,” said I, “or Madam, truly your forgiveness I implore; But the fact is I was napping, and so gently you came rapping, And so faintly you came tapping, tapping at my chamber door, That I scarce was sure I heard you”—here I opened wide the door;— Darkness there and nothing more. Deep into that darkness peering, long I stood there wondering, fearing, Doubting, dreaming dreams no mortal ever dared to dream before; But the silence was unbroken, and the stillness gave no token, And the only word there spoken was the whispered word, “Lenore?” This I whispered, and an echo murmured back the word, “Lenore!”— Merely this and nothing more. Back into the chamber turning, all my soul within me burning, Soon again I heard a tapping somewhat louder than before. “Surely,” said I, “surely that is something at my window lattice; Let me see, then, what thereat is, and this mystery explore— Let my heart be still a moment and this mystery explore;— ’Tis the wind and nothing more!” """ nlp(question="What is the month?", context=context) /home/brian/transformers/src/transformers/tokenization_utils_base.py:1292: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead. warnings.warn( --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-8-d64d967fe1dd> in <module> ----> 1 nlp(question="What is the month?", context=context) ~/transformers/src/transformers/pipelines.py in __call__(self, *args, **kwargs) 1636 with torch.no_grad(): 1637 # Retrieve the score for the context tokens only (removing question tokens) -> 1638 fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()} 1639 start, end = self.model(**fw_args)[:2] 1640 start, end = start.cpu().numpy(), end.cpu().numpy() ~/transformers/src/transformers/pipelines.py in <dictcomp>(.0) 1636 with torch.no_grad(): 1637 # Retrieve the score for the context tokens only (removing question tokens) -> 1638 fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()} 1639 start, end = self.model(**fw_args)[:2] 1640 start, end = start.cpu().numpy(), end.cpu().numpy() ValueError: expected sequence of length 384 at dim 1 (got 379) ``` ## Expected behavior ``` {'score': 0.9419336915016174, 'start': 401, 'end': 410, 'answer': 'December;'} ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7528/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7527
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7527/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7527/comments
https://api.github.com/repos/huggingface/transformers/issues/7527/events
https://github.com/huggingface/transformers/pull/7527
713,205,458
MDExOlB1bGxSZXF1ZXN0NDk2NTY0Mzcy
7,527
Check and update model list in index.rst automatically
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'll let you rebase!" ]
1,601
1,601
1,601
COLLABORATOR
null
# What does this PR do? We currently have two lists of the models to maintain (in the main README and in the `index.rst` for the docs) which is painful. This script checks the mode list in the index.rst is a proper converted copy of the model list in the README and can also fix it with the command `make fix-copies` (same API as for the copies of parts of the model). It also enforces our maximum character per line so that the `index.rst` is still readable in an editor.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7527/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7527", "html_url": "https://github.com/huggingface/transformers/pull/7527", "diff_url": "https://github.com/huggingface/transformers/pull/7527.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7527.patch", "merged_at": 1601905245000 }
https://api.github.com/repos/huggingface/transformers/issues/7526
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7526/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7526/comments
https://api.github.com/repos/huggingface/transformers/issues/7526/events
https://github.com/huggingface/transformers/issues/7526
713,197,449
MDU6SXNzdWU3MTMxOTc0NDk=
7,526
Almost Have Model Parallelism Working on GPT2 Fine-Tuning
{ "login": "alexorona", "id": 11825654, "node_id": "MDQ6VXNlcjExODI1NjU0", "avatar_url": "https://avatars.githubusercontent.com/u/11825654?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexorona", "html_url": "https://github.com/alexorona", "followers_url": "https://api.github.com/users/alexorona/followers", "following_url": "https://api.github.com/users/alexorona/following{/other_user}", "gists_url": "https://api.github.com/users/alexorona/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexorona/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexorona/subscriptions", "organizations_url": "https://api.github.com/users/alexorona/orgs", "repos_url": "https://api.github.com/users/alexorona/repos", "events_url": "https://api.github.com/users/alexorona/events{/privacy}", "received_events_url": "https://api.github.com/users/alexorona/received_events", "type": "User", "site_admin": false }
[ { "id": 2627272588, "node_id": "MDU6TGFiZWwyNjI3MjcyNTg4", "url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel", "name": "Model Parallel", "color": "8B66A5", "default": false, "description": "Model Parallelilsm Implementations" } ]
closed
false
null
[]
[ "Hey @alexorona, it's great that you are working on model parallelism! Could you open a PR with the proposed changes to GPT2 and maybe post a code snippet to reproduce your error with the code in your PR?\r\n\r\nI'm happy to take a look :-) ", "@patrickvonplaten An update: I managed to get around the problem by carefully following every tensor in the GPT2 model and had to place the `lm_head` on the first layer because the `wte` layer is used by it. Model parallelism is now working and c confirmed with nvidia-smi: tensors are moving appropriately and well-balanced across the GPUs and models are training. It's not useful at all to create a PR right now: I'm using a version of transformers that's probably a month old and the code is barely holding together.\r\n\r\nI'd like to get the latest (and hopefully last) functional challenge solved before putting together a PR. This latest problem is extremely challenging. Only someone with a very deep knowledge of the transformers implement of the `Attention` class, `Trainer` and possibly `modeling_utils.py` can provide an intuition as to what's happening. \r\n\r\nHere's the problem: The same model on the same GPU with the same token size consumes more memory while training if there are more GPUs. For example, the first attention block will consume 2.2 GB of GPU memory on a Tesla v100 if there are 4 Tesla v100s on the instance. Meanwhile, the same block will consume 4.2 GB of GPU memory on a Tesla v100 if there are 8 Tesla v100s on the instance. It makes no sense. I believe the behavior is coming from `Attention._attn`. Does anyone know whether there's something in the implementation that would cause tensors to use up more GPU memory if more GPUs are added? Note: I've disabled all of the data parallelism in `Trainer`, which would be the obvious source.\r\n\r\nSome additional details:\r\n```\r\n# Running gpt-xl on 4 GPUs. Model uses 2.2 GB of memory per attention block.\r\nBlock: 0\r\nTotal GPU Memory Usage: 1.40915456\r\nBlock: 1\r\nTotal GPU Memory Usage:3.604413952\r\nBlock: 2\r\nTotal GPU Memory Usage:5.803867648\r\nBlock: 3\r\nTotal GPU Memory Usage:8.003321344\r\nBlock: 4\r\nTotal GPU Memory Usage: 10.20277504\r\nBlock: 5\r\nTotal GPU Memory Usage: 12.402228736\r\nBlock: 6\r\nTotal GPU Memory Usage: 14.601682432\r\n```\r\n\r\n```\r\n# Running gpt-xl on 8 GPUs. Model uses 4.2 GB of memory per attention block.\r\nBlock: 0\r\nTotal GPU Memory Usage: 1.468251648\r\nBlock: 1\r\nTotal GPU Memory Usage: 5.847236096\r\nBlock: 2\r\nTotal GPU Memory Usage: 10.226220544\r\nBlock: 3\r\nTotal GPU Memory Usage: 14.605204992\r\n```\r\n\r\n```\r\nclass GPT2Model(GPT2PreTrainedModel):\r\n def __init__(self, config, layers_map):\r\n super().__init__(config)\r\n\r\n self.wte = nn.Embedding(config.vocab_size, config.n_embd)\r\n self.wpe = nn.Embedding(config.n_positions, config.n_embd)\r\n self.drop = nn.Dropout(config.embd_pdrop)\r\n self.h = nn.ModuleList([Block(config.n_ctx, config, scale=True) for _ in range(config.n_layer)])\r\n\r\n # Layers map for 4 GPUs\r\n self.layers_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\r\n 1: [11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23],\r\n 2: [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36],\r\n 3: [37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]}\r\n\r\n self.wte = self.wte.to('cuda:' + str(min(self.layers_map.keys())))\r\n self.wpe = self.wpe.to('cuda:' + str(min(self.layers_map.keys())))\r\n self.drop = self.drop.cuda('cuda:' + str(min(self.layers_map.keys())))\r\n\r\n def forward(\r\n self,\r\n input_ids=None,\r\n past_key_values=None,\r\n attention_mask=None,\r\n token_type_ids=None,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n encoder_hidden_states=None,\r\n encoder_attention_mask=None,\r\n use_cache=None,\r\n output_attentions=None,\r\n output_hidden_states=None,\r\n return_dict=None,\r\n **kwargs,\r\n ):\r\n # Skipping over some details in the forward method\r\n\r\n for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)):\r\n print('Block:', i)\r\n gpu_memory = torch.cuda.memory_allocated(device = hidden_states.device)/(1e+9)\r\n print(\"GPU Memory:\", gpu_memory)\r\n if output_hidden_states:\r\n print('output hidden shapes us true')\r\n all_hidden_states = all_hidden_states + (hidden_states.view(*output_shape),)\r\n \r\n if layer_past is not None:\r\n layer_past = layer_past.cuda(hidden_states.device)\r\n\r\n if attention_mask is not None:\r\n attention_mask = attention_mask.to(hidden_states.device)\r\n del outputs\r\n outputs = block(\r\n hidden_states,\r\n layer_past=layer_past,\r\n attention_mask=attention_mask,\r\n head_mask=head_mask[i],\r\n encoder_hidden_states=encoder_hidden_states,\r\n encoder_attention_mask=encoder_attention_mask,\r\n use_cache=use_cache,\r\n output_attentions=output_attentions,\r\n )\r\n\r\n hidden_states, present = outputs[:2]\r\n\r\n if use_cache is True:\r\n presents = presents + (present,)\r\n\r\n if output_attentions:\r\n all_attentions = all_attentions + (outputs[2],)\r\n \r\n for k,v in self.layers_map.items():\r\n if i == v[-1] and k != max(self.layers_map.keys()):\r\n hidden_states = hidden_states.to('cuda:' + str(k + 1))\r\nclass Block(nn.Module):\r\n def __init__(self, n_ctx, config, scale=True):\r\n super().__init__()\r\n hidden_size = config.n_embd\r\n inner_dim = config.n_inner if config.n_inner is not None else 4 * hidden_size\r\n self.ln_1 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon)\r\n self.attn = Attention(hidden_size, n_ctx, config, scale)\r\n self.ln_2 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon)\r\n if config.add_cross_attention:\r\n self.crossattention = Attention(hidden_size, n_ctx, config, scale, is_cross_attention=True)\r\n self.ln_cross_attn = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon)\r\n self.mlp = MLP(inner_dim, config)\r\n\r\n def forward(\r\n self,\r\n hidden_states,\r\n layer_past=None,\r\n attention_mask=None,\r\n head_mask=None,\r\n encoder_hidden_states=None,\r\n encoder_attention_mask=None,\r\n use_cache=False,\r\n output_attentions=False,\r\n ):\r\n attn_outputs = self.attn(\r\n self.ln_1(hidden_states),\r\n layer_past=layer_past,\r\n attention_mask=attention_mask,\r\n head_mask=head_mask,\r\n use_cache=use_cache,\r\n output_attentions=output_attentions,\r\n )\r\n attn_output = attn_outputs[0] # output_attn: a, present, (attentions)\r\n outputs = attn_outputs[1:]\r\n # residual connection\r\n hidden_states = attn_output + hidden_states\r\n\r\n if encoder_hidden_states is not None:\r\n # add one self-attention block for cross-attention\r\n assert hasattr(\r\n self, \"crossattention\"\r\n ), f\"If `encoder_hidden_states` are passed, {self} has to be instantiated with cross-attention layers by setting `config.add_cross_attention=True`\"\r\n cross_attn_outputs = self.crossattention(\r\n self.ln_cross_attn(hidden_states),\r\n attention_mask=attention_mask,\r\n head_mask=head_mask,\r\n encoder_hidden_states=encoder_hidden_states,\r\n encoder_attention_mask=encoder_attention_mask,\r\n output_attentions=output_attentions,\r\n )\r\n attn_output = cross_attn_outputs[0]\r\n # residual connection\r\n hidden_states = hidden_states + attn_output\r\n outputs = outputs + cross_attn_outputs[1:] # add cross attentions if we output attention weights\r\n\r\n feed_forward_hidden_states = self.mlp(self.ln_2(hidden_states))\r\n # residual connection\r\n hidden_states = hidden_states + feed_forward_hidden_states\r\n\r\n outputs = [hidden_states] + outputs\r\n return outputs # hidden_states, present, (cross_attentions, attentions)\r\n\r\nclass Attention(nn.Module):\r\n def __init__(self, nx, n_ctx, config, scale=False, is_cross_attention=False):\r\n super().__init__()\r\n\r\n n_state = nx # in Attention: n_state=768 (nx=n_embd)\r\n # [switch nx => n_state from Block to Attention to keep identical to TF implem]\r\n assert n_state % config.n_head == 0\r\n self.register_buffer(\r\n \"bias\", torch.tril(torch.ones((n_ctx, n_ctx), dtype=torch.uint8)).view(1, 1, n_ctx, n_ctx)\r\n )\r\n self.register_buffer(\"masked_bias\", torch.tensor(-1e4))\r\n self.n_head = config.n_head\r\n self.split_size = n_state\r\n self.scale = scale\r\n self.is_cross_attention = is_cross_attention\r\n if self.is_cross_attention:\r\n self.c_attn = Conv1D(2 * n_state, nx)\r\n self.q_attn = Conv1D(n_state, nx)\r\n else:\r\n self.c_attn = Conv1D(3 * n_state, nx)\r\n self.c_proj = Conv1D(n_state, nx)\r\n self.attn_dropout = nn.Dropout(config.attn_pdrop)\r\n self.resid_dropout = nn.Dropout(config.resid_pdrop)\r\n self.pruned_heads = set()\r\n self.softmax = nn.Softmax(dim=-1)\r\n\r\n def prune_heads(self, heads):\r\n if len(heads) == 0:\r\n return\r\n heads, index = find_pruneable_heads_and_indices(\r\n heads, self.n_head, self.split_size // self.n_head, self.pruned_heads\r\n )\r\n index_attn = torch.cat([index, index + self.split_size, index + (2 * self.split_size)])\r\n\r\n # Prune conv1d layers\r\n self.c_attn = prune_conv1d_layer(self.c_attn, index_attn, dim=1)\r\n self.c_proj = prune_conv1d_layer(self.c_proj, index, dim=0)\r\n\r\n # Update hyper params\r\n self.split_size = (self.split_size // self.n_head) * (self.n_head - len(heads))\r\n self.n_head = self.n_head - len(heads)\r\n self.pruned_heads = self.pruned_heads.union(heads)\r\n\r\n def _attn(self, q, k, v, attention_mask=None, head_mask=None, output_attentions=False):\r\n w = torch.matmul(q, k)\r\n if self.scale:\r\n w = w / (float(v.size(-1)) ** 0.5)\r\n nd, ns = w.size(-2), w.size(-1)\r\n\r\n if not self.is_cross_attention:\r\n # if only \"normal\" attention layer implements causal mask\r\n mask = self.bias[:, :, ns - nd : ns, :ns]\r\n mask = mask.to(w.device)\r\n \r\n self.masked_bias = self.masked_bias.to(w.device)\r\n w = torch.where(mask.bool(), w, self.masked_bias.to(w.dtype))\r\n if attention_mask is not None:\r\n # Apply the attention mask\r\n w = w + attention_mask\r\n w = self.softmax(w)\r\n w = self.attn_dropout(w)\r\n\r\n # Mask heads if we want to\r\n if head_mask is not None:\r\n w = w * head_mask\r\n\r\n outputs = [torch.matmul(w, v)]\r\n if output_attentions:\r\n outputs.append(w)\r\n del mask, nd, ns, v, q, k, attention_mask, head_mask, output_attentions, w\r\n torch.cuda.synchronize()\r\n torch.cuda.empty_cache()\r\n torch.cuda.reset_max_memory_allocated()\r\n return outputs\r\n\r\n# Layers map for 4 x Tesla v100 GPUs\r\nlayers_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],\r\n 1: [11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23],\r\n 2: [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36],\r\n 3: [37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]}\r\n\r\n# Layers map for 8 x Tesla v100 GPUs\r\nlayers_map = {0: [0, 1, 2, 3, 4],\r\n 1: [5, 6, 7, 8, 9, 10],\r\n 2: [11, 12, 13, 14, 15, 16],\r\n 3: [17, 18, 19, 21, 22, 23],\r\n 4: [24, 25, 26, 27, 28, 29],\r\n 5: [30, 31, 32, 33, 34, 35],\r\n 6: [36, 37, 38, 39, 40, 41],\r\n 7: [, 42, 43, 44, 45, 46, 47]}\r\n\r\nmodel = TransformersModel(layers_map = layers_map)\r\n```", "Got it working. The `TrainingArguments` object has data parallelism baked into it (along with a lot of other things), so my manual override of the batch size was failing. The tensor size was exploding because `TrainingArguments` was automatically adjusting the minimum batch size to be the number of tensors. Fine-tuned a gpt2-xl model with 1024 tokens with good results in just 15 minutes.", "@alexorona Can you please share code of some example(s) of parallelisms you get to work (maybe through PR to repo examples)?", "@patrickvonplaten @LSinev Greatly simplified the working code and refined so that the same basic approach can be used for other models as well. I took at look at T5 and 99% confident I can use the same approach to make it parallelizable. Will get a PR up this week, probably by Sunday.", "[Model parallel PR](https://github.com/huggingface/transformers/pull/8696) merged to transformers." ]
1,601
1,610
1,606
CONTRIBUTOR
null
# ❓ Questions & Help I've managed to get model parallelism working on `gpt2` for forward inference by modifying the `GPT2Model` class and adding a few lines to the `generate` method to ensure that tensors that need to be on the same device always are. It automatically distributes the blocks evenly across any number of GPUs that are detected. I had to add an additional argument to `Trainer` (`model_parallel`) to avoid conflicting distribute behavior. Unfortunately, I'm stuck on backprop, specifically in `Trainer.training_step` on the line `loss.backward()`. loss is `tensor(71.5152, device='cuda:3', grad_fn=<NllLossBackward>)` The error is: ``` RuntimeError: expected device cuda:3 but got device cuda:0 (compute_types at ..\aten\src\ATen\native\TensorIterator.cpp:246) (no backtrace available) ``` So something somewhere is on the wrong device. It would be a miracle if someone knows how to fix this, but more realistically I'm hoping for a list of things that might be wrong which I can check. Can do a code review with someone from the transformers team. This could be the pattern to enable model parallelism on all PyTorch transformers.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7526/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7525
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7525/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7525/comments
https://api.github.com/repos/huggingface/transformers/issues/7525/events
https://github.com/huggingface/transformers/pull/7525
713,180,242
MDExOlB1bGxSZXF1ZXN0NDk2NTQzMzYy
7,525
Fix post_init of some TrainingArguments
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
COLLABORATOR
null
# What does this PR do? The `HFArgumentParser` doesn't actually support bools that are None and wants them `True` or `False`. Therefore, some changes I made to a few fields of `TrainingArguments` do not work when invoked on the command line. This PR fixes that.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7525/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7525", "html_url": "https://github.com/huggingface/transformers/pull/7525", "diff_url": "https://github.com/huggingface/transformers/pull/7525.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7525.patch", "merged_at": 1601903957000 }
https://api.github.com/repos/huggingface/transformers/issues/7524
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7524/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7524/comments
https://api.github.com/repos/huggingface/transformers/issues/7524/events
https://github.com/huggingface/transformers/issues/7524
713,139,662
MDU6SXNzdWU3MTMxMzk2NjI=
7,524
Training loss suddenly increases and stays the same
{ "login": "gungor2", "id": 22436319, "node_id": "MDQ6VXNlcjIyNDM2MzE5", "avatar_url": "https://avatars.githubusercontent.com/u/22436319?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gungor2", "html_url": "https://github.com/gungor2", "followers_url": "https://api.github.com/users/gungor2/followers", "following_url": "https://api.github.com/users/gungor2/following{/other_user}", "gists_url": "https://api.github.com/users/gungor2/gists{/gist_id}", "starred_url": "https://api.github.com/users/gungor2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gungor2/subscriptions", "organizations_url": "https://api.github.com/users/gungor2/orgs", "repos_url": "https://api.github.com/users/gungor2/repos", "events_url": "https://api.github.com/users/gungor2/events{/privacy}", "received_events_url": "https://api.github.com/users/gungor2/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello, you would probably have more answers if you asked this question on the forums: https://discuss.huggingface.co", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "> # ❓ 问题与帮助\r\n> ## 细节\r\n> 我正在尝试开发语言模型。该代码是[此处](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py)给出的示例的修改版本。\r\n> \r\n> 问题在于,在训练过程中的某个时刻,训练损失激增并保持不变。Eval loss 也遵循这一趋势。我附上张量板图以供参考。\r\n> \r\n> 我正在使用变压器 = 3.2.0。我使用transformers=3.3.0 进行了测试并观察到了同样的问题。\r\n> \r\n> [TensorBoard.pdf](https://github.com/huggingface/transformers/files/5314434/TensorBoard.pdf)\r\n\r\nI also encountered this problem, is it solved?", "Hi, I encountered similar issue when pretraining BERT. Have you solved this problem? Could you please share some insights?" ]
1,601
1,687
1,607
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I am trying to develop a language model. The code is modified version of the example given [here](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py). The problem is that at some point during the training, the training loss spikes up and stays the same. Eval loss follows this trend as well. I am attaching tensorboard plots for reference. I am using transformers=3.2.0. I tested with transformers=3.3.0 and observed the same issue. [TensorBoard.pdf](https://github.com/huggingface/transformers/files/5314434/TensorBoard.pdf)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7524/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7523
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7523/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7523/comments
https://api.github.com/repos/huggingface/transformers/issues/7523/events
https://github.com/huggingface/transformers/pull/7523
713,118,440
MDExOlB1bGxSZXF1ZXN0NDk2NDkxNzIy
7,523
Cleanup documentation for BART, Marian, MBART and Pegasus
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
COLLABORATOR
null
# What does this PR do? This is a follow-up from #7345 to finishing cleaning up the documentation for all models. Nothing of importance apart from the configurations of the 4 classes (BART, Marian, MBART and Pegasus) that can't have the same docstrings with a choose your own adventure default. They all need their own docstrings, since the defaults are different for a lot of values (which also means they all need their own implementation and not subclass the same config or at least pass the arguments with the proper defaults to the superclass). I tried my best to document the actual default for each, but may missed a few fields.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7523/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7523", "html_url": "https://github.com/huggingface/transformers/pull/7523", "diff_url": "https://github.com/huggingface/transformers/pull/7523.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7523.patch", "merged_at": 1601886133000 }
https://api.github.com/repos/huggingface/transformers/issues/7522
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7522/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7522/comments
https://api.github.com/repos/huggingface/transformers/issues/7522/events
https://github.com/huggingface/transformers/pull/7522
713,111,745
MDExOlB1bGxSZXF1ZXN0NDk2NDg2MDMz
7,522
[s2s] Adafactor support for builtin trainer
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,601
1,601
1,601
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7522/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7522", "html_url": "https://github.com/huggingface/transformers/pull/7522", "diff_url": "https://github.com/huggingface/transformers/pull/7522.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7522.patch", "merged_at": 1601587666000 }